Efficient recognition of facial expressions does not require motor simulation

  1. Gilles Vannuscorps (corresponding author)
  2. Michael Andres
  3. Alfonso Caramazza
  1. Department of Psychology, Harvard University, United States
  2. Institute of Neuroscience, Université catholique de Louvain, Belgium
  3. Psychological Sciences Research Institute, Université catholique de Louvain, Belgium
  4. Center for Mind/Brain Sciences, Università degli Studi di Trento, Italy
2 figures, 2 tables and 3 additional files

Figures

Figure 1
Results of Experiments 1–8 by individual participant.

In black: control participants; in light grey: mean of the controls; in green and yellow: IMS 8 and 10, who performed Experiments 1–5 with normotypical efficiency; in red: the nine other IMS. A small circle (°) marks IMS participants with a ‘normotypical’ score (no more than 0.85 standard deviations below the controls’ mean performance) after discarding any control participant with an abnormally low score (more than 2 SD below the mean of the other control participants; such controls are marked with an asterisk, *). An asterisk (*) also marks IMS participants whose score fell more than two standard deviations below the mean of the controls.

Figure 2 with 3 supplements
Confusion matrices.

(A, B, C) Distribution of control participants’ (left) and IMS 8 and 10’s (right) percentage of trials in which they chose each of the six response alternatives when faced with the six displayed facial expressions in Experiment 1 (A), 2 (B), and 3 (C).

Figure 2—figure supplement 1
Confusion matrices.

(A, B, C) Distribution of control participants’ (left), IMS 8 and 10’s (middle) and the other IMS’s (right) percentage of trials in which they chose each of the six response alternatives when faced with the six displayed facial expressions in Experiment 1 (A), 2 (B), and 3 (C).

Figure 2—figure supplement 2
Analysis of participants' action units when imitating facial expressions.

IMS 8, 10 and 5 control participants were asked to imitate pictures of an actor’s face expressing one of six facial expressions (anger, disgust, fear, happiness, sadness, and surprise). Their imitations were video-recorded and then analyzed offline with OpenFace 2.1.0, an open-source, deep-learning-based facial analysis system allowing automatic detection of action unit (AU) presence and intensity (Amos et al., 2016; Baltrusaitis et al., 2016). For each participant and facial expression, we first computed the average intensity of 12 facial action units relevant for facial expressions (AU01, 02, 04, 05, 06, 07, 09, 12, 15, 20, 23 and 26). Then, to obtain an objective measure of the similarity between the different facial expressions executed by each participant, we correlated the 12-AU intensity profiles of the different facial expressions with each other. The AU intensities observed during the execution of the different facial expressions were highly correlated in the three IMS, and much more so than in typical control participants. This supports the claim that although IMS 8 and 10 could execute some subtle facial movements, these movements were largely unspecific.
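The correlation analysis described above can be sketched as follows. This is a minimal illustration, not the authors’ code: it assumes the per-expression mean AU intensities have already been extracted from the OpenFace output, and it computes the mean pairwise Pearson correlation between the six 12-AU profiles (a high value indicates that the expressions were executed with similar, i.e. unspecific, AU patterns).

```python
import numpy as np

# The 12 action units used in the analysis (AU01, 02, 04, 05, 06,
# 07, 09, 12, 15, 20, 23 and 26), in a fixed order.
AUS = ["AU01", "AU02", "AU04", "AU05", "AU06", "AU07",
       "AU09", "AU12", "AU15", "AU20", "AU23", "AU26"]
EXPRESSIONS = ["anger", "disgust", "fear", "happiness", "sadness", "surprise"]

def expression_similarity(au_intensity):
    """Mean pairwise correlation between AU intensity profiles.

    au_intensity: dict mapping each expression name to an array of 12
    mean AU intensities (e.g. averaged over the frames of one video).
    Returns the mean of the off-diagonal entries of the 6 x 6
    correlation matrix: values near 1 mean the six expressions were
    produced with nearly identical AU patterns.
    """
    profiles = np.array([au_intensity[e] for e in EXPRESSIONS])
    corr = np.corrcoef(profiles)  # 6 x 6 matrix of Pearson correlations
    off_diag = corr[~np.eye(len(EXPRESSIONS), dtype=bool)]
    return off_diag.mean()
```

On this measure, a participant whose six expressions recruit distinct AU patterns scores near zero, whereas near-identical AU patterns across expressions score near one.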

Figure 2—figure supplement 3
IMS 8 and 10's facial expression execution.

(A) Facial expression naming task. Control participants (six men, nine women, mean age = 26) were sequentially presented with pictures of a model from the Karolinska Directed Emotional Faces set (KDEF; Actress AF01, front view; Lundqvist et al., 1998) and video-clips of IMS 8 and 10 (see Figure 2—figure supplement 1B-D) executing the six basic facial expressions (in rows) and were asked to choose the corresponding label (in columns) among six alternatives (anger, disgust, fear, happiness, sadness, and surprise). Control participants categorized the facial expressions of the picture model accurately but erred most of the time when asked to categorize the facial expressions of the IMS. Under the (minimal) assumption that a facial expression is recognized accurately if (1) it is more often correctly than incorrectly labelled and (2) its corresponding label is provided more often for that expression than for the other ones, only the facial expression of anger in IMS 8 (recognized by 36% of the control participants) was recognized accurately by the controls. (B) Facial expression sorting task. Control participants (five men, nine women) were simultaneously presented with the six facial expressions executed by a model (Actress AF01, front view, from the KDEF; Lundqvist et al., 1998), by IMS 8 or by IMS 10 and were asked to associate the six pictures with their corresponding labels (anger, disgust, fear, happiness, sadness, and surprise). Each label could be used only once. Control participants categorized the facial expressions of the picture model accurately but erred most of the time when asked to categorize the facial expressions of the two IMS. Under the same assumption, only the facial expressions of happiness in IMS 8 (recognized by 100% of the controls) and of disgust, fear and surprise in IMS 10 (recognized by 43%, 64%, and 50% of the control participants, respectively) were recognized accurately by the controls.
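The two-part recognition criterion used in these tasks can be made concrete with a short sketch. This is an illustrative reading, not the authors’ code: it assumes a 6 × 6 confusion matrix (rows = displayed expression, columns = chosen label, entries = percentage of trials), and it interprets criterion (1) as the correct label being the modal response for that expression.

```python
import numpy as np

def recognized(conf, i):
    """Return True if expression i counts as accurately recognized.

    conf: 6 x 6 array; conf[row, col] = % of trials on which the
    expression in `row` received the label in `col`.
    Criterion (1): the correct label is chosen more often than any
    incorrect label for expression i (row-wise modal response).
    Criterion (2): label i is given more often for expression i than
    for any other expression (column-wise maximum).
    """
    row = conf[i].copy()
    row[i] = -1  # exclude the diagonal cell from the row comparison
    col = conf[:, i].copy()
    col[i] = -1  # exclude the diagonal cell from the column comparison
    return conf[i, i] > row.max() and conf[i, i] > col.max()
```

For example, an expression whose correct label dominates both its row and its column passes the criterion, while an expression that is most often given another expression’s label fails it.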

Tables

Table 1
Summary of the IMS participants’ facial movements.
| IMS | Inf. lip | Sup. lip | Nose | Eyebrows | Forehead | R. cheek | L. cheek | Sup. R. eyelid | Inf. R. eyelid | Sup. L. eyelid | Inf. L. eyelid |
| IMS1 | None | None | None | None | None | None | None | None | None | None | None |
| IMS2 | Slight | None | None | None | None | Slight | Slight | Slight | Slight | Slight | Slight |
| IMS3 | Slight | None | None | None | None | None | None | Slight | None | Slight | None |
| IMS4 | Slight | None | None | None | None | Slight | Slight | None | None | None | None |
| IMS5 | Slight | Slight | None | None | None | Slight | Slight | Slight | None | Slight | None |
| IMS6 | Mild | None | None | None | None | Mild | Mild | Slight | None | Slight | None |
| IMS7 | None | None | None | None | None | Mild | None | Mild | Slight | None | None |
| IMS8 | Slight | None | None | None | None | Mild | Slight | Slight | Slight | Slight | Slight |
| IMS9 | Mild | None | None | None | None | Mild | Slight | Slight | Slight | None | None |
| IMS10 | None | None | None | None | None | Slight | Slight | None | None | None | None |
| IMS11 | None | None | None | None | None | Slight | None | None | None | None | None |
Table 2
Information regarding IMS participants’ visual and visuo-perceptual abilities.
| IMS | Vision | Reported best corrected acuity | Strabismus | Eye movements | Mid-level perception* (modified t-test)† |
| IMS1 | Hypermetropy, astigmatism | Mild vision loss (7/10) | Slight | H: Absent; V: Reduced | 0.9 |
| IMS2 | Hypermetropy, astigmatism | Normal vision | Slight | H: Absent; V: Reduced | −3.3 |
| IMS3 | Myopia | Normal vision | None | H: Absent; V: Reduced | −0.3 |
| IMS4 | Hypermetropy, astigmatism | Mild vision loss (5/10) | None | H: Absent; V: Typical | −2.9 |
| IMS5 | Hypermetropy, astigmatism | Mild vision loss (8/10) | Slight | H: Very limited; V: Typical | −0.3 |
| IMS6 | Hypermetropy, astigmatism | Normal vision | Slight | H: Typical; V: Typical | 0.1 |
| IMS7 | Hypermetropy, astigmatism | Moderate vision loss of the left eye (2/10) | None | H: Typical; V: Typical | 0.5 |
| IMS8 | Normal | Normal vision | None | H: Typical; V: Typical | 0.5 |
| IMS9 | Normal | Normal vision | Slight | H: Absent; V: Absent | −3.3 |
| IMS10 | Myopia | Mild vision loss (8/10) | None | H: Absent; V: Typical | −0.3 |
| IMS11 | Myopia, astigmatism | Mild vision loss: 6/10 right eye; 5/10 left eye | Slight | H: Reduced; V: Typical | −4.2 |

  * Leuven Perceptual Organization Screening Test, L-POST (Torfs et al., 2014). † Modified t-test (Crawford and Howell, 1998).

Additional files

Supplementary file 1

Information regarding IMS participants’ demographic, neurological, psychiatric, medical and surgical/therapeutic history.

https://cdn.elifesciences.org/articles/54687/elife-54687-supp1-v2.docx
Supplementary file 2

Facial action units corresponding to the facial expression of the six basic emotions and their presence/absence in the repertoire of the IMS 8 and 10.

https://cdn.elifesciences.org/articles/54687/elife-54687-supp2-v2.docx
Transparent reporting form
https://cdn.elifesciences.org/articles/54687/elife-54687-transrepform-v2.docx

Cite this article

  1. Gilles Vannuscorps
  2. Michael Andres
  3. Alfonso Caramazza
(2020)
Efficient recognition of facial expressions does not require motor simulation
eLife 9:e54687.
https://doi.org/10.7554/eLife.54687