Introduction

Theories of pain communication highlight the diversity of pain manifestations, which occur through multiple channels: verbal reports, vocal complaints, changes in posture, and facial expressions. From an evolutionary perspective, several manifestations of pain appear to be preserved across vertebrate species and serve various functional roles in preserving the integrity of the organism (Sneddon 2019). Withdrawal behaviour allows the individual to move away from the noxious source, while facial expressions in social species convey information about the presence of a potential threat and an appeal for assistance (Hadjistavropoulos et al. 2011). Functionally distinct manifestations imply at least partly segregated neurophysiological processing (Sliwa et al. 2022). Previous functional magnetic resonance imaging (fMRI) studies investigating the neural correlates of acute pain have suggested that spontaneous or induced fluctuations in pain facial expression partly reflect changes in activity within the cortical targets of the spino-thalamo-cortical pathways. These fluctuations are, however, independent of changes in stimulus intensity and are inversely related to activity in prefrontal regions (Kunz et al. 2011; Vachon-Presseau et al. 2016; Kunz, Chen, and Rainville 2020). This suggests that pain facial expression may reflect the integration of activity across distributed brain networks processing ascending nociceptive signals, determining action policy, and gating efferent facial motor outputs.

The interest in developing pain neuromarkers has led researchers to use multivariate pattern analysis to investigate the distributed brain mechanisms underlying the experience of pain evoked by acute nociceptive stimuli. However, fMRI studies have revealed not one but several brain signatures of acute experimental pain, which may reflect the diversity and complexity of pain-related function. The Neurologic Pain Signature (NPS; Wager et al. 2013) was developed to predict changes in pain induced by variations in stimulus intensity and captured by subjective reports, reflecting primarily the cerebral contributions to acute nociceptive pain (Krishnan et al. 2016; Van Oudenhove et al. 2020; Wager et al. 2013). To account for spontaneous fluctuations in the perception of pain intensity, the Stimulus Intensity Independent Pain Signature (SIIPS-1) was trained on noxious thermal trials after removing the effects of stimulus intensity and of the NPS response (Woo et al. 2017). More recently, the affective dimension of pain has received more attention, resulting in a multivariate pattern predictive of negative affect ratings to thermal pain, referred to here as the Thermal Pain Aversive Signature (TPAS; Čeko et al. 2022). Finally, a signature was developed to characterise the neuronal representations associated with the valuation of pain (PVP) in the context of a decision task involving a cost-benefit analysis of future pain against a monetary reward (Coll et al. 2022). Taken together, these signatures have improved our understanding of the neurobiological mechanisms of pain, as reflected in self-reports or explicit decision making.

Facial expression has been used as a reliable behavioural measure of pain across different mammal species (Costa et al. 2014; Craig 1992; Evangelista et al. 2019; Langford et al. 2010; Sotocina et al. 2011), but few studies have investigated the brain mechanisms associated with the spontaneous non-verbal communication of pain in humans (Kunz et al. 2011; Kunz, Chen, and Rainville 2020). As an automatic behavioural manifestation, pain facial expression might be an indicator of activity in nociceptive systems, perceptual and evaluative processes, or general negative affect. We assessed the association between pain facial expression and the available pain-relevant brain signatures, and we applied multivariate analysis with machine learning techniques to develop a predictive brain activation model of the facial responses to pain.

Results and Discussion

The Facial Action Coding System (FACS; Ekman and Friesen 1978) was used to quantify the facial expression of pain in healthy participants while brain responses evoked by brief, moderately painful heat stimuli were recorded using fMRI. For each trial, the intensity and the frequency of pain-related action units were scored and combined into a FACS composite score (Methods). The association with the NPS, the SIIPS-1, the PVP, and the TPAS was assessed by correlating the FACS scores with the dot product computed between each signature and the whole-brain activation map of each individual trial.

Pain facial expression was not significantly associated with NPS expression (Pearson r = .01; p = .86), TPAS expression (Pearson r = -.03; p = .55), or PVP expression (Pearson r = .08; p = .07). A low but significant positive correlation was found with SIIPS-1 expression (Pearson r = .10; p = .03). These low values indicate that the available pain-relevant brain signatures show poor sensitivity to the facial expression of pain, which motivated the development of a new multivariate brain pattern to predict pain expression.

We used a multivariate approach at the voxel level across the whole brain to develop the Facial Expression of Pain Signature (FEPS; Methods). A LASSO principal component regression was applied to predict the FACS composite scores from the trial-by-trial fMRI contrast images. The FEPS predicted the FACS composite scores with a performance significantly above chance level (averaged cross-validated prediction across 10 folds: Pearson r = .49 ± .11; R2 = .17 ± .08; RMSE = 6.54 ± 0.99; p < .001 compared to a permuted distribution; Figure 1A-B). These results indicate that a multivariate brain pattern can account for the facial responses related to pain.

Figure 1. FEPS: a brain signature of the facial expression of pain.

A) Relationship between the actual and the predicted FACS composite scores for each cross-validation fold. B) Distribution of the Pearson r scores across the cross-validation folds. C) Predictive weight map of pain expression thresholded at p-uncorrected < .05 using bootstrap tests performed with 5000 samples (with replacement). The thresholded map is shown for visualization and interpretation purposes only; the prediction was made using voxel weights across the whole brain. MNI coordinates of the clusters and their related z-scores can be found in Table S1. The colour bar represents the z-scored regression weights reflecting the positive or negative association with the magnitude of the FACS composite score of pain expression.

The distributed pattern of activity predicting pain expression was projected back onto the brain to examine the spatial distribution of the higher weights contributing to the prediction (Figure 1C). Positive weight clusters were found in the primary motor cortex (M1; bilateral), the frontal pole, the right posterior parietal cortex, the anterior midcingulate cortex (aMCC), and the dorsal part of the parietal operculum, adjacent to the secondary somatosensory cortex (S2). These regions are respectively associated with motor control, reward and affective value, attentional processes, and nociceptive pain processing (Table S1) (Price 2000; Rushworth et al. 2011; Shackman et al. 2011). Notably, a separate analysis indicated that M1 activity alone was not sufficient to achieve significant prediction of facial responses (Figure S1). Regions showing negative weights included the dorsolateral prefrontal cortex (dlPFC), the ventrolateral PFC (vlPFC), the midcingulate cortex (MCC), the subgenual anterior cingulate cortex (ACC), the ventral part of the parietal operculum, the precuneus, and the ventromedial PFC (vmPFC) (Table S1). Negative weights imply that increased activity in these regions is associated with decreased facial response, consistent with a role in the inhibition of pain expression (Kunz et al. 2011). These results are consistent with the distributed nature of brain activity associated with pain facial expression, reflecting in part the ascending nociceptive response and the ensuing affective processes, as well as top-down socio-affective regulation underlying the implementation of learned display rules (Karmann et al. 2016).

Several regions identified in the FEPS have also been reported in other pain-related brain signatures. Regions predictive of pain facial expression and pain intensity (NPS and SIIPS-1) include S2, the vmPFC, and the precuneus. The vlPFC, a region that does not receive direct spino-thalamo-cortical nociceptive afferents, was reported in both the FEPS and the SIIPS-1. Overlap between the FEPS and the PVP includes regions associated with reward and affect (i.e., the orbitofrontal cortex; OFC). Finally, the primary motor cortex and S2 were also reported as contributing regions in the TPAS. The weak positive association between the SIIPS-1 and facial expression, together with the spatial overlap of some regions across these pain-relevant signatures, suggests possible shared features with the FEPS.

We computed the cosine similarity between the FEPS and other pain-related brain signatures to further examine the shared and specific representations across these predictive patterns (Methods). Cosine similarity ranging from -.03 to .09 was found between the FEPS and the other pain-related brain signatures, reflecting the overall low similarity between the signatures at the whole-brain level (Figure 2A). The highest similarity values with the FEPS were found for the SIIPS-1 and the PVP, consistent with the notion that the facial expression of pain may at least partly reflect changes in brain responses associated with spontaneous fluctuations in the pain experience captured by pain ratings, and with the assessment of pain value. Similarity with the FEPS was further assessed across different cortical networks. The significant positive similarity with the SIIPS-1 in the frontoparietal network may suggest common mechanisms in the prediction and emotional regulation of the pain experience that would be reflected in both facial expression and subjective reports (Pan et al. 2018). Recruitment of the frontoparietal network may also be involved in the conscious representation of the pain context, making nociceptive information available for integration into decision-making processes (Coll et al. 2022; Bastuji et al. 2016; Cul, Baillet, and Dehaene 2007; Zheng et al. 2020). Finally, the convergent similarities in the limbic network with the PVP and the TPAS are consistent with a key role of affective pain processing in influencing facial expression, perceived pain value, and reported unpleasantness or aversiveness of pain (Čeko et al. 2022; Garcia-Larrea and Peyron 2013; Roy et al. 2009).

Figure 2. Spatial similarity between the FEPS and other pain-related signatures.

A) Pattern similarity between the FEPS and other pain-related brain signatures using the weights of the full brain patterns. Pattern similarities were computed at the voxel level using the cosine similarity; a value of 1 reflects proportional patterns, a value of 0 reflects orthogonal patterns, and a value of −1 reflects patterns of opposite direction. The left panel shows the similarity matrix, and the right panel shows only the significant similarities between the pain-related signatures (* p < .05; ** p < .01; *** p < .001). B) Deconstruction of the pattern similarity with regard to seven cortical networks as defined in the Yeo atlas (Yeo et al. 2011): Visual Network (VN); Somatomotor Network (SMN); Dorsal Attention Network (DAN); Ventral Attention Network (VAN); Limbic Network (LN); Frontoparietal Network (FPN); Default Mode Network (DMN). Null distributions computed using permutation tests are shown, and the actual similarity values are represented by the vertical bars. Significant similarity values were found in the FPN (similarity = .24; p = .001) and the DMN (similarity = .18; p = .005) for the SIIPS-1, in the LN (similarity = .23; p = .003) and the DMN (similarity = .11; p = .05) for the PVP, and in the LN (similarity = .11; p = .04) for the TPAS.

In research and clinical practice, verbal reports of perceived pain intensity are considered the gold standard for measuring pain. Measures that correlate only weakly with these subjective reports, such as facial expressions of pain, are often considered less valid metrics of the experience of pain even though they provide important complementary information on pain-related processes (Hadjistavropoulos et al. 2011). The FEPS was able to predict the magnitude of the facial expression of pain. Regions contributing to the prediction include motor areas and pain-related areas associated with both the sensory and the affective processing of pain. Although the FEPS shares, to some extent, similar representations with other pain-relevant signatures within various cerebral networks, it remains distinct from these signatures. The results of this study provide unique evidence that facial expression conveys complementary information on pain-related brain processes. Future studies should provide a more comprehensive account of the diverse manifestations of pain and their related functions to better capture the pain phenomenon in its entirety.

Methods

Participants

Secondary analyses of brain imaging data acquired in 34 participants (18 women, 16 men; age mean±SD = 23.4±2.5 years) were performed in this study (Kunz et al. 2011). No participant reported a history of neurological or psychiatric disease, or of chronic pain. Participants reported abstaining from alcohol and analgesics for at least 24 h before the experimental session. All participants provided written informed consent and received monetary compensation for their participation. The video recording of the face was indicated in the consent form, but it was not emphasized and was not mentioned again at the time of data acquisition. All procedures were approved by the ethics committee of the Centre de recherche de l'Institut universitaire de gériatrie de Montréal.

Study design

Pre-experimental session

Participants underwent a pre-experimental session to assess their range of thermal pain sensitivity using a magnitude estimation procedure. All participants included in this study showed a normal range of thermal pain sensitivity. The degree of facial expressiveness was also evaluated.

Thermal stimuli

Thermal stimuli were delivered using a Medoc TSA-2001 thermosensory stimulator with an MRI-compatible 3 × 3 cm² contact probe (Medoc), positioned on the lower left leg. Stimuli lasted 9 seconds (2 s ramp up, 5 s plateau at the target temperature, 2 s ramp down) and were followed by an interstimulus interval of 18-25 seconds. The experiment was programmed using E-Prime software (Psychology Software Tools, Pittsburgh, PA). A baseline temperature of 38°C was applied between stimuli for all participants. Target temperatures were determined individually before the MRI scans to induce a warm non-painful sensation in the control condition, and a moderate to strong self-reported pain intensity in the experimental condition (targeting 75-80/100 on the pain scale; temperature mean±SD = 47.8±0.90°C). Participants were not aware that the warm and painful temperatures remained constant across trials. The order of the control and experimental conditions was pseudorandomized. There were 8 trials per condition per run and 16 trials per condition in total, yielding 544 pain trials across all participants (34 participants × 16 pain trials). After each stimulus, participants rated the warm or painful sensation by moving a cursor on a computerized visual analogue scale with an MRI-compatible response key. Because we were interested in the facial expression of pain, only the painful trials were considered in the analyses.

Facial expression

The facial expression of the participants was recorded using an MRI-compatible camera (MRC Systems) mounted on the head coil of the MRI scanner. To allow quantification of facial expressions occurring during stimulation, a signal was sent from the stimulator to the sound card to automatically mark the onset of each stimulus on the video recording. Two certified FACS coders rated the frequency and intensity (on a 5-point scale) of pain-related action units (AUs; AU4, AU6-7, AU9-10, and AU43) for each trial (see Kunz et al. 2011 for details on AU selection). From these scores, a composite score (FACS composite score) reflecting pain expression on each painful trial was computed as the product of the mean AU frequency and the mean AU intensity. The FACS composite scores were then used as the target variable for prediction. To examine whether facial expression could be confounded with pain reports, we predicted the trial-by-trial FACS composite scores from the pain ratings using a mixed-effects model with participant as a random effect. Pain ratings were only weakly associated with facial responses (R2GLMM = .01; β = 0.80±0.25; t(518.07) = 3.21; p < .001).
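To illustrate, here is a minimal sketch of the composite-score computation for a single trial; the AU values below are hypothetical placeholders.

```python
import numpy as np

# Hypothetical codings for the pain-related AUs (AU4, AU6-7, AU9-10, AU43)
# on one painful trial.
au_frequency = np.array([2, 1, 0, 1])  # number of occurrences per AU
au_intensity = np.array([3, 2, 0, 4])  # intensity rating per AU (5-point scale)

# FACS composite score: product of the mean AU frequency
# and the mean AU intensity for the trial.
facs_composite = au_frequency.mean() * au_intensity.mean()
print(facs_composite)  # 1.0 * 2.25 = 2.25
```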

Anatomical and functional acquisition

MRI images were acquired on a 3T Siemens TRIO scanner using a 12-channel head coil. T1-weighted structural data were collected using an MP-RAGE sequence (TR = 2300 ms, TE = 2.91 ms, flip angle = 9°, FOV = 256 mm, matrix size = 240×256, voxel size = 1×1×1.2 mm, 170 whole-brain volumes). Functional data were acquired using a T2*-weighted EPI sequence (TR = 3000 ms, TE = 30 ms, flip angle = 90°, FOV = 220×220 mm², voxel size = 3.44×3.44×3.40 mm, 40 interleaved axial slices).

Preprocessing of fMRI data

The fMRI data were preprocessed using SPM8 (Statistical Parametric Mapping, Version 8; Wellcome Department of Imaging Neuroscience, London, United Kingdom) in MATLAB 7.4 (The MathWorks Inc, Natick, Massachusetts, United States). Preprocessing steps included slice-timing correction, correction for head motion, and co-registration between functional and anatomical images for each participant. Functional images were normalized to MNI space. Spatial smoothing (6 mm FWHM Gaussian kernel) and a high-pass filter (128 s cutoff) were also applied.

The BOLD signal was modelled using a canonical hemodynamic response function. First-level analyses were computed in SPM8 using a general linear model (GLM) to obtain an activation map for each trial. The six movement parameters and the averaged signals from the white matter and the cerebrospinal fluid were included as nuisance regressors. Eleven trials were discarded due to excessive movement, leaving a total of 533 activation maps of individual painful trials to develop the Facial Expression of Pain Signature (FEPS).
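The original first-level analysis was carried out in SPM8. As an illustrative sketch only (not the authors' pipeline), a comparable trial-wise GLM with nuisance regressors could be set up in Python with nilearn; the file names, onsets, and confound columns below are hypothetical.

```python
import pandas as pd
from nilearn.glm.first_level import FirstLevelModel

# Hypothetical onsets: one named condition per pain trial, so the GLM
# yields a separate contrast map for each trial.
trial_onsets = [27.0 * i + 10.0 for i in range(16)]
events = pd.DataFrame({
    "onset": trial_onsets,
    "duration": [9.0] * len(trial_onsets),  # 9 s thermal stimuli
    "trial_type": [f"pain_{i:02d}" for i in range(len(trial_onsets))],
})

# Hypothetical confounds file: six motion parameters plus mean white
# matter and cerebrospinal fluid signals.
confounds = pd.read_csv("confounds.tsv", sep="\t")

model = FirstLevelModel(
    t_r=3.0,            # TR of the EPI sequence
    hrf_model="spm",    # canonical hemodynamic response function
    smoothing_fwhm=6,   # 6 mm Gaussian kernel
    high_pass=1 / 128,  # 128 s high-pass filter (in Hz)
)
model = model.fit("run1_bold.nii.gz", events=events, confounds=confounds)

# One activation map per painful trial
trial_maps = [model.compute_contrast(f"pain_{i:02d}")
              for i in range(len(trial_onsets))]
```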

Analyses

Association between the facial expression of pain and pain-related brain signatures

The dot product between each of the Neurologic Pain Signature (NPS; Wager et al. 2013), the Stimulus Intensity Independent Pain Signature-1 (SIIPS-1; Woo et al. 2017), the Predictive Value of Pain (PVP; Coll et al. 2022), and the Thermal Pain Aversive Signature (TPAS; Čeko et al. 2022) and the trial-by-trial activation maps was computed to derive a measure of similarity (pattern expression) between each signature and the maps. These scalar values were then correlated with the FACS composite scores to assess the association between the facial expression of pain and the NPS, the SIIPS-1, the PVP, and the TPAS. Pearson r correlation coefficients and p-values are reported.
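A minimal sketch of this pattern-expression computation is given below, assuming hypothetical file names and that the signature map and trial maps have already been resampled to a common space.

```python
import numpy as np
from scipy.stats import pearsonr
from nilearn.maskers import NiftiMasker

# Hypothetical inputs: a brain mask, a signature weight map (e.g. the NPS),
# a 4D image of trial-by-trial activation maps, and trial-wise FACS scores.
masker = NiftiMasker(mask_img="brain_mask.nii.gz").fit()
signature = masker.transform("nps_weights.nii.gz").ravel()  # (n_voxels,)
trial_maps = masker.transform("trial_maps_4d.nii.gz")       # (n_trials, n_voxels)
facs_scores = np.loadtxt("facs_scores.txt")                 # (n_trials,)

# Pattern expression: dot product between the signature weights and
# each trial activation map (one scalar per trial).
expression = trial_maps @ signature

# Association between pattern expression and the facial expression of pain
r, p = pearsonr(expression, facs_scores)
print(r, p)
```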

Multivariate pattern analysis

We applied a Least Absolute Shrinkage and Selection Operator Principal Component Regression (LASSO-PCR) with a 10-fold cross-validation procedure for brain decoding (Wager et al. 2011), using the scikit-learn implementation (Pedregosa et al. 2011). The analyses were performed using the trial-by-trial contrast images as input to the model and the participants as a grouping factor (i.e., data from a given participant could appear only in the training set or only in the testing set of a given cross-validation fold). This procedure was used to predict the FACS composite scores from the contrast images. The performance of each regression model was evaluated using Pearson's correlation coefficient (Pearson r), the coefficient of determination (R2), and the root mean square error (RMSE). Scikit-learn's implementation of the coefficient of determination was used, which allows negative values when the model performs worse than an unfitted model (i.e., a horizontal line). The averaged performance metrics across folds are reported for each analysis. To test whether the models performed significantly above chance, permutation tests were computed using 5000 iterations, yielding a p-value corresponding to the probability of obtaining the observed metric by chance. A bootstrap resampling procedure was also performed to evaluate the stability of the voxel contributions to the model performance. This procedure consists of randomly resampling the dataset 5000 times with replacement, each sample containing the same number of observations as the original dataset. The LASSO-PCR procedure described above was then applied to each sample, and z-scores and p-values were calculated from the resulting distributions of regression coefficients.
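A minimal sketch of this decoding procedure follows; the data arrays are random placeholders standing in for the trial-by-trial contrast images, FACS composite scores, and participant IDs.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.linear_model import Lasso
from sklearn.model_selection import GroupKFold, cross_validate

# Hypothetical data: 533 trials x voxel features, composite scores,
# and participant IDs used as the grouping factor.
rng = np.random.default_rng(0)
X = rng.standard_normal((533, 1000))
y = rng.standard_normal(533)
groups = rng.integers(0, 34, size=533)

# LASSO-PCR: LASSO regression fitted on the principal components
# of the voxel-wise data.
lasso_pcr = make_pipeline(StandardScaler(), PCA(), Lasso(alpha=1.0))

# 10-fold cross-validation grouped by participant: all trials from a
# given participant fall entirely in the training or the testing set.
cv = GroupKFold(n_splits=10)
scores = cross_validate(lasso_pcr, X, y, groups=groups, cv=cv,
                        scoring=("r2", "neg_root_mean_squared_error"))
print(scores["test_r2"].mean())
```

The permutation test described above could be implemented along the same lines with scikit-learn's permutation_test_score, passing the same grouping factor.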

This analysis procedure was first applied at the whole-brain level. It was then repeated using a spatial mask including only the precentral region bilaterally, derived from the Harvard-Oxford Cortical Atlas (Caviness et al. 1996). This secondary analysis was conducted to verify whether the pattern of activity within the primary motor cortex (M1) might be sufficient to predict facial expression. This model was not able to predict the FACS composite scores of pain expression and is not discussed further (Figure S1A-B). To ensure that the whole-brain model prediction of the facial responses was not confounded with the pain ratings, we predicted the FACS composite scores from the FEPS pattern expression scores and the trial-by-trial pain ratings using a mixed-effects model with participant as a random effect. Variance in the FACS composite scores was significantly explained by the FEPS pattern expression scores (β = 5.18±0.25; t(237.2) = 20.62; p < .001), but not by the pain ratings (β = 0.24±0.20; t(484.9) = 1.21; p = .23).
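A sketch of this confound check using statsmodels, assuming a hypothetical trial-wise table with the column names shown in the comments:

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical table with columns: facs (FACS composite score),
# feps (FEPS pattern expression), rating (pain rating), participant (ID).
df = pd.read_csv("trialwise_scores.csv")

# Mixed-effects model with participant as a random effect (random intercept)
model = smf.mixedlm("facs ~ feps + rating", data=df,
                    groups=df["participant"]).fit()
print(model.summary())
```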

Spatial similarity across the FEPS and pain-related brain signatures

Similarity between the FEPS and other pain-related brain signatures was assessed using the cosine similarity computed across all voxels. This metric was computed against the NPS, the SIIPS-1, the PVP, and the TPAS. To further explore the cortical similarities between the FEPS and other pain-related brain signatures, we also computed the cosine similarity within different cortical networks (i.e., visual, somatomotor, dorsal attention, ventral attention, limbic, frontoparietal, and default mode) (Yeo et al. 2011). Permutation tests (n = 1000) using generative null models preserving the spatial autocorrelation of a target brain map were used to assess the significance of the similarity between the brain signatures (Burt et al. 2020). P-values were calculated from the generated null distributions as the fraction of permuted samples in which the absolute cosine similarity was equal to or larger than the cosine similarity obtained with the original signatures.
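A minimal sketch of the similarity computation and permutation p-value follows. The weight vectors and surrogate maps are random placeholders; in practice, surrogates preserving spatial autocorrelation can be generated with tools such as BrainSMASH (Burt et al. 2020).

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two voxel-wise weight vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical weight vectors over the same voxels, and surrogate maps
# preserving spatial autocorrelation (shape: n_permutations x n_voxels).
rng = np.random.default_rng(0)
feps = rng.standard_normal(1000)
nps = rng.standard_normal(1000)
null_maps = rng.standard_normal((1000, 1000))

observed = cosine_similarity(feps, nps)

# p-value: fraction of surrogates whose absolute cosine similarity is
# equal to or larger than the observed similarity.
null_sims = np.array([cosine_similarity(feps, m) for m in null_maps])
p_value = np.mean(np.abs(null_sims) >= observed)
print(observed, p_value)
```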

Supplementary Material

Figure S1. Predictive performance of the M1-based model.

A) Relationship between the actual and the predicted FACS composite scores for each cross-validation fold (k = 10) using only the activity from the primary motor cortex as defined by the Harvard-Oxford Cortical Atlas (Caviness et al. 1996). B) Distribution of the Pearson r scores across the 10 cross-validation folds (Pearson r = .11 ± .08; R2 = -.29 ± .33; RMSE = 8.02 ± 0.96; p = .58).

Table S1. Peak regions contributing to the prediction of the facial expression scores.

Acknowledgements

We wish to thank André Cyr and Carollyn Hurst from the Unité de Neuroimagerie Fonctionnelle for their help with the data acquisition, and Andréanne Proulx, François Paugman, and Pierre Bellec for their insights on the analyses as part of the BrainHack School. We also acknowledge Compute Canada for the computational resources allocated to conduct the analyses. M.-E.P. was supported by scholarships from the Fonds de Recherche du Québec-Nature et Technologies and from the Institut de Valorisation des Données. This study was funded by a Discovery Grant from the Natural Sciences and Engineering Research Council of Canada (RGPIN-2018-06799) to P.R. and by the NIH (R01 DA035484) to T.D.W.

Credit statement

Marie-Eve Picard: Conceptualization, Software, Formal analysis, Writing-Original draft, Visualization. Miriam Kunz: Conceptualization, Methodology and data acquisition, Data curation. Jen-I Chen: Methodology and data acquisition, Data curation, Formal analysis. Michel-Pierre Coll: Software, Validation, Visualization. Étienne Vachon-Presseau: Conceptualization, Methodology, Validation. Tor D. Wager: Conceptualization, Validation, Funding acquisition. Pierre Rainville: Conceptualization, Methodology and data acquisition, Writing-Original draft, Funding acquisition. All authors reviewed and approved the final version.

Data Availability

Participants did not provide informed consent for the sharing of individual data, but unthresholded group maps are available upon request. Unthresholded patterns are available on Neurovault (https://neurovault.org/collections/13924/).

Code Availability

All custom Python scripts used to produce the analyses are available at https://github.com/me-pic/picard_feps_2022.