The representation of facial emotion expands from sensory to prefrontal cortex with development

  1. Xiaoxu Fan
  2. Abhishek Tripathi
  3. Kelly Bijanki (corresponding author)
  1. Baylor College of Medicine, United States
  2. Rice University, United States
7 figures, 3 tables and 1 additional file

Figures

Task design and analysis methods.

(A) Movie structure. A 6.5 min short film was created by editing fragments from 'Pippi on the Run' into a coherent narrative. The movie consisted of 13 interleaved blocks of video accompanied by speech or music. (B) Data analysis schematic. The standard analysis pipeline for extracting emotion features from the movie and constructing an encoding model to predict intracranial EEG (iEEG) responses while participants watch the short film.

© 1970, Beta Film GmbH. All rights reserved. Screenshots in panel A are taken from 'Pippi on the Run' (1970). These are not covered by the CC-BY 4.0 license.
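
To make the encoding-model step in panel B concrete, the following is a minimal sketch of such a pipeline in Python, assuming frame-wise emotion features and a single electrode's iEEG time course have already been extracted and temporally aligned. The ridge penalty, fold count, and sampling choices here are illustrative assumptions, not the authors' exact implementation.

```python
import numpy as np
from scipy.stats import pearsonr
from sklearn.linear_model import Ridge
from sklearn.model_selection import KFold

def encoding_model_cv(features, response, alpha=1.0, n_splits=5):
    """Predict one electrode's activity from stimulus features.

    features : (n_timepoints, n_features) emotion features from the movie
    response : (n_timepoints,) iEEG activity for a single electrode
    Returns the mean Pearson r between held-out measured and predicted activity.
    """
    rs = []
    for train, test in KFold(n_splits=n_splits).split(features):
        model = Ridge(alpha=alpha).fit(features[train], response[train])
        rs.append(pearsonr(response[test], model.predict(features[test]))[0])
    return float(np.mean(rs))

# Toy demonstration with random data standing in for real features/recordings.
rng = np.random.default_rng(0)
X = rng.normal(size=(390, 48))          # e.g., 6.5 min at 1 Hz, 48 features
y = X @ rng.normal(size=48) + rng.normal(size=390)
print(f"cross-validated r = {encoding_model_cv(X, y):.2f}")
```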

Prediction performance of encoding models in dorsolateral prefrontal cortex (DLPFC).

(A) Spatial distribution of electrodes in the DLPFC. Electrodes from all participants in each group are projected onto Montreal Neurological Institute (MNI) space and shown on the average brain. Red shaded areas indicate the middle frontal cortex as defined by the FreeSurfer Desikan-Killiany atlas (Desikan et al., 2006). Electrodes outside the DLPFC are not shown. (B) Averaged prediction accuracy across participants for the speech condition. The performance of the encoding model is measured as the Pearson correlation coefficient (r) between measured and predicted brain activity. (C) Averaged prediction accuracy across participants for the music condition. (D) Difference in prediction accuracy between the speech and music conditions for each group. Error bars are standard error of the mean. *p<0.05 (Nchildhood=8, Npost-childhood=13).

Figure 2—source data 1

Prediction performance of encoding models in dorsolateral prefrontal cortex (DLPFC).

https://cdn.elifesciences.org/articles/107636/elife-107636-fig2-data1-v1.xlsx
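
The condition contrast in Figure 2D amounts to a paired comparison of per-participant prediction accuracies. The sketch below illustrates one way to run it; the paired t-test and the random placeholder accuracies are assumptions for demonstration, not the statistics behind the figure.

```python
import numpy as np
from scipy.stats import ttest_rel

# Placeholder values standing in for per-participant mean prediction accuracy
# (Pearson r) in each condition; real values come from the encoding models.
rng = np.random.default_rng(0)
speech_r = rng.normal(0.15, 0.05, size=13)   # e.g., post-childhood group, N=13
music_r = rng.normal(0.10, 0.05, size=13)

t, p = ttest_rel(speech_r, music_r)          # paired test across participants
diff = speech_r - music_r
sem = diff.std(ddof=1) / np.sqrt(diff.size)  # error bar as SEM of the difference
print(f"mean diff = {diff.mean():.3f} ± {sem:.3f} (SEM), t = {t:.2f}, p = {p:.3f}")
```
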
Prediction performance of encoding models in posterior superior temporal cortex (pSTC).

(A) The electrode distribution in native brain space for the four children. Electrodes in the pSTC are shown in green and electrodes in the dorsolateral prefrontal cortex (DLPFC) in yellow. (B) Prediction accuracy of encoding models in children. Bars indicate mean prediction accuracy across participants and lines indicate individual data. (C) Prediction accuracy of encoding models for s19 in both the DLPFC and the pSTC. (D) Spatial distribution of recording contacts in the pSTC of post-childhood participants. The pSTC electrodes identified in individual space are projected onto Montreal Neurological Institute (MNI) space and shown on the average brain. Contacts outside the pSTC are not shown. Blue shaded areas indicate the superior temporal cortex as defined by the FreeSurfer Desikan-Killiany atlas (Desikan et al., 2006). (E) Averaged prediction accuracy across post-childhood participants (N=25). Error bars are standard error of the mean. **p<0.01. *p<0.05.

Figure 3—source data 1

Prediction performance of encoding models in posterior superior temporal cortex (pSTC).

https://cdn.elifesciences.org/articles/107636/elife-107636-fig3-data1-v1.xlsx
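
The MNI projection used in Figures 2A and 3D reduces, per subject, to applying an affine transform to the contact coordinates. Below is a minimal sketch of that step; the affine's source (for example, FreeSurfer's talairach.xfm) and the native RAS coordinate convention are assumptions.

```python
import numpy as np

def native_to_mni(contacts_ras, affine):
    """Map electrode coordinates from native space to MNI space.

    contacts_ras : (n_contacts, 3) positions in the subject's native RAS space (mm)
    affine       : (4, 4) native-to-MNI transform, e.g., from talairach.xfm
    """
    homogeneous = np.c_[contacts_ras, np.ones(len(contacts_ras))]
    return (affine @ homogeneous.T).T[:, :3]

# Toy check: the identity affine leaves coordinates unchanged.
print(native_to_mni(np.array([[42.0, -55.0, 20.0]]), np.eye(4)))
```
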
Correlation between encoding weights and age.

(A) Left: correlation between the averaged encoding weights of five complex emotions and age. Right: correlation between the averaged encoding weights of six basic emotions and age. (B) Pearson correlation coefficients between the encoding weights of 48 facial expression features and age, ranked from largest to smallest. Significant correlations are noted with * (p<0.05, uncorrected) or ** (p<0.01, uncorrected). (C) Correlation between the encoding weights of embarrassment, pride, guilt, and interest and age (N=12).
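
The ranking in panel B can be reproduced schematically: correlate each feature's encoding weight with age across participants and sort by r. A sketch under those assumptions (the toy data and function name are illustrative):

```python
import numpy as np
from scipy.stats import pearsonr

def rank_weight_age_correlations(weights, ages):
    """Rank features by the correlation of their encoding weights with age.

    weights : (n_participants, n_features) per-participant encoding weights
    ages    : (n_participants,) participant ages
    Returns feature indices, r values, and uncorrected p values, sorted by r.
    """
    results = [pearsonr(weights[:, j], ages) for j in range(weights.shape[1])]
    r = np.array([res[0] for res in results])
    p = np.array([res[1] for res in results])
    order = np.argsort(r)[::-1]              # largest to smallest, as in panel B
    return order, r[order], p[order]

# Toy data standing in for 12 participants x 48 facial expression features.
rng = np.random.default_rng(1)
idx, r, p = rank_weight_age_correlations(rng.normal(size=(12, 48)),
                                         rng.uniform(5, 55, size=12))
```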

Author response image 1
Time courses of Hume AI-extracted facial expression features for the first block of the music condition.

Only the top five facial expressions are shown here due to space limitations.

© 1970, Beta Film GmbH. All rights reserved. Screenshots are taken from 'Pippi on the Run' (1970). These are not covered by the CC-BY 4.0 license.
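
Selecting a 'top 5' from per-frame expression scores could be done as in the sketch below; the DataFrame layout (time index, one column per expression) is an assumption, not Hume AI's actual response schema.

```python
import pandas as pd
import matplotlib.pyplot as plt

def plot_top_expressions(scores: pd.DataFrame, k: int = 5):
    """Plot the k facial expressions with the highest mean score in a block.

    scores : DataFrame indexed by time (s), one column per expression,
             e.g., assembled from per-frame Hume AI emotion scores.
    """
    top = scores.mean().nlargest(k).index    # rank expressions by mean score
    ax = scores[top].plot()
    ax.set(xlabel="Time (s)", ylabel="Expression score")
    plt.show()
```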

Author response image 2
Time courses of amusement.

(A) and (B) Amusement conveyed by the face or the music in a 30-s music block. Facial emotion features are extracted by Hume AI. For emotion from music, we approximated the amusement time course using a weighted combination of low-level acoustic features (RMS energy, spectral centroid, MFCCs), which capture the intensity, brightness, and timbre cues linked to amusement. Note that the music continues even when no faces are presented. (C) and (D) Amusement conveyed by the face or the voice in a 30-s speech block. From 0 to 5 s, a girl is introducing her friend to a stranger. The camera focuses on the friend, who appears nervous, while the girl's voice sounds cheerful. This mismatch explains why the shapes of the two time series differ at the beginning. Such situations occur frequently in naturalistic movies.
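
The acoustic approximation described in panels A and B could look like the sketch below, using librosa for the low-level features. The z-scoring and the specific weights are illustrative assumptions; the exact weighting behind the figure is not specified here.

```python
import numpy as np
import librosa

def acoustic_amusement_proxy(wav_path, weights=(0.5, 0.3, 0.2)):
    """Approximate an amusement time course from low-level acoustic features.

    Combines z-scored RMS energy (intensity), spectral centroid (brightness),
    and frame-averaged MFCCs (timbre) with illustrative weights.
    """
    y, sr = librosa.load(wav_path)
    rms = librosa.feature.rms(y=y)[0]
    centroid = librosa.feature.spectral_centroid(y=y, sr=sr)[0]
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13).mean(axis=0)
    z = lambda v: (v - v.mean()) / v.std()
    return weights[0] * z(rms) + weights[1] * z(centroid) + weights[2] * z(mfcc)
```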

Author response image 3
Difference in Granger causality indices (face – nonface) in the alpha/beta and gamma bands for both directions.

We identified a series of face onsets in the movie that participants watched. Each trial was defined as −0.1 to 1.5 s relative to onset. For the non-face control trials, we used houses, animals, and scenes. Granger causality was calculated for the 0–0.5 s, 0.5–1 s, and 1–1.5 s time windows. For the post-childhood group, GC indices were averaged across participants. Error bars are standard error of the mean.
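
For concreteness, a time-domain Granger causality index can be written as the log ratio of restricted to full residual variance from two least-squares autoregressions. The sketch below makes that explicit under assumed choices (fixed model order, ordinary least squares); it omits the band-limiting and trial segmentation described above.

```python
import numpy as np

def gc_index(x, y, order=5):
    """Granger causality index for x -> y: ln(var_restricted / var_full).

    Fits y from its own past (restricted) and from the past of both y and x
    (full), then compares residual variances; larger values mean x's past
    helps predict y beyond y's own history.
    """
    n = len(y)
    past_y = np.column_stack([y[order - k - 1:n - k - 1] for k in range(order)])
    past_x = np.column_stack([x[order - k - 1:n - k - 1] for k in range(order)])
    target = y[order:]
    res_r = target - past_y @ np.linalg.lstsq(past_y, target, rcond=None)[0]
    full = np.hstack([past_y, past_x])
    res_f = target - full @ np.linalg.lstsq(full, target, rcond=None)[0]
    return float(np.log(res_r.var() / res_f.var()))
```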

Tables

Key resources table
Reagent type (species) or resource | Designation | Source or reference | Identifiers | Additional information
Software, algorithm | MNE-Python | MNE-Python | RRID:SCR_005972 |
Software, algorithm | Python | Python | RRID:SCR_008394 |
Software, algorithm | FreeSurfer | FreeSurfer | RRID:SCR_001847 |
Software, algorithm | Hume AI | Hume AI | |
Table 1
Demographic information of childhood group.
ID | Sex | Age (years) | Left pSTC contacts | Right pSTC contacts | Left DLPFC contacts | Right DLPFC contacts
s02 | F | 9 | 0 | 0 | 9 | 0
s10 | F | 8 | 0 | 0 | 8 | 0
s19 | F | 8 | 6 | 0 | 8 | 0
s32 | F | 6 | 0 | 0 | 5 | 0
s33 | F | 9 | 0 | 4 | 0 | 0
s37 | M | 5 | 0 | 0 | 4 | 0
s39 | F | 5 | 7 | 0 | 0 | 0
s41 | M | 7 | 0 | 0 | 4 | 0
s49 | F | 10 | 4 | 0 | 0 | 0
s50 | F | 9 | 0 | 0 | 6 | 0
s63 | M | 5 | 0 | 0 | 9 | 0
Table 2
Demographic information of post-childhood group.
ID | Sex | Age (years) | Left pSTC contacts | Right pSTC contacts | Left DLPFC contacts | Right DLPFC contacts
s1 | M | 55 | 0 | 4 | 0 | 0
s3 | F | 33 | 6 | 0 | 6 | 0
s5 | F | 33 | 8 | 0 | 8 | 0
s6 | F | 43 | 9 | 0 | 0 | 0
s12 | M | 37 | 0 | 0 | 12 | 0
s14 | F | 18 | 4 | 0 | 8 | 0
s16 | M | 17 | 8 | 0 | 9 | 0
s17 | M | 28 | 0 | 8 | 0 | 0
s18 | F | 15 | 6 | 0 | 11 | 0
s20 | F | 25 | 9 | 0 | 0 | 0
s22 | M | 21 | 4 | 0 | 0 | 0
s24 | F | 47 | 11 | 0 | 0 | 0
s25 | M | 14 | 0 | 0 | 6 | 0
s26 | F | 48 | 11 | 0 | 0 | 0
s27 | M | 15 | 11 | 0 | 0 | 0
s28 | M | 21 | 0 | 6 | 0 | 0
s31 | F | 13 | 0 | 0 | 7 | 0
s34 | F | 51 | 5 | 0 | 0 | 0
s38 | F | 14 | 0 | 0 | 4 | 0
s40 | M | 49 | 6 | 0 | 0 | 0
s43 | M | 19 | 0 | 4 | 0 | 8
s45 | M | 19 | 0 | 0 | 4 | 0
s48 | F | 18 | 5 | 0 | 0 | 0
s51 | M | 46 | 6 | 0 | 9 | 0
s54 | F | 31 | 4 | 0 | 0 | 0
s55 | F | 23 | 6 | 0 | 0 | 0
s57 | F | 36 | 0 | 0 | 7 | 0
s58 | F | 16 | 6 | 0 | 0 | 0
s59 | F | 30 | 7 | 0 | 0 | 0
s60 | M | 42 | 8 | 0 | 0 | 0
s61 | F | 16 | 8 | 0 | 0 | 0

Additional files

Cite this article

  1. Xiaoxu Fan
  2. Abhishek Tripathi
  3. Kelly Bijanki
(2026)
The representation of facial emotion expands from sensory to prefrontal cortex with development
eLife 14:RP107636.
https://doi.org/10.7554/eLife.107636.3