Quantifying dynamic facial expressions under naturalistic conditions

  1. Jayson Jeganathan (corresponding author)
  2. Megan Campbell
  3. Matthew Hyett
  4. Gordon Parker
  5. Michael Breakspear
  1. University of Newcastle Australia, Australia
  2. University of Western Australia, Australia
  3. University of New South Wales, Australia

Abstract

Facial affect is expressed dynamically: a giggle, a grimace, an agitated frown. However, the characterization of human affect has relied almost exclusively on static images. This approach cannot capture the nuances of human communication or support the naturalistic assessment of affective disorders. Using machine vision and systems modelling, we studied the dynamic facial expressions of people viewing emotionally salient film clips. We found that the apparent complexity of dynamic facial expressions can be captured by a small number of simple spatiotemporal states, each a composite of distinct facial actions with a unique spectral fingerprint. The sequential expression of these states is common across individuals viewing the same film stimuli but differs in those with the melancholic subtype of major depressive disorder. This approach provides a platform for translational research, capturing dynamic facial expressions under naturalistic conditions and enabling new quantitative tools for the study of affective disorders and related mental illnesses.
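The pipeline implied by the abstract (frame-by-frame facial action unit intensities summarised by a small set of recurring spatiotemporal states) can be illustrated with off-the-shelf tools. The sketch below is hypothetical and is not the authors' published code: it assumes action unit intensities have already been extracted (for example with OpenFace), uses hmmlearn's GaussianHMM as a stand-in for the state-decomposition step, and the state count and spectral summary are illustrative choices.

    # Hypothetical sketch: recover recurring facial-expression "states" from
    # action unit (AU) intensity time series, then summarise each state's
    # temporal occupancy with a power spectrum ("spectral fingerprint").
    import numpy as np
    from hmmlearn.hmm import GaussianHMM

    rng = np.random.default_rng(0)
    au_intensities = rng.random((1000, 17))   # placeholder: n_frames x n_AUs (e.g. OpenFace output)

    # Each hidden state's mean vector is a composite of facial actions
    model = GaussianHMM(n_components=5, covariance_type="diag", n_iter=100, random_state=0)
    model.fit(au_intensities)
    states = model.predict(au_intensities)    # per-frame sequence of expression states

    # "Spectral fingerprint": power spectrum of each state's occupancy time course
    occupancy = (states[:, None] == np.arange(model.n_components)).astype(float)
    spectra = np.abs(np.fft.rfft(occupancy - occupancy.mean(axis=0), axis=0)) ** 2

    print(model.means_.shape, states.shape, spectra.shape)

In practice the number of states and the features supplied to the model would be chosen by model comparison; the random array above stands in for real data only to show the shapes of the inputs and outputs.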

Data availability

The DISFA dataset is publicly available at http://mohammadmahoor.com/disfa/, and can be accessed by application at http://mohammadmahoor.com/disfa-contact-form/. The melancholia dataset is not publicly available due to ethical and privacy considerations for patients, and because the original ethics approval does not permit sharing this data.

The following previously published dataset was used: the Denver Intensity of Spontaneous Facial Action (DISFA) database (http://mohammadmahoor.com/disfa/; https://ieeexplore.ieee.org/document/6475933).

Article and author information

Author details

  1. Jayson Jeganathan

    School of Psychology, University of Newcastle Australia, Newcastle, Australia
    For correspondence
    jayson.jeganathan@gmail.com
    Competing interests
    The authors declare that no competing interests exist.
    ORCID: 0000-0003-4175-918X
  2. Megan Campbell

    School of Psychology, University of Newcastle Australia, Newcastle, Australia
    Competing interests
    The authors declare that no competing interests exist.
    ORCID: 0000-0003-4051-1529
  3. Matthew Hyett

    School of Psychological Sciences, University of Western Australia, Perth, Australia
    Competing interests
    The authors declare that no competing interests exist.
  4. Gordon Parker

    School of Psychiatry, University of New South Wales, Kensington, Australia
    Competing interests
    The authors declare that no competing interests exist.
  5. Michael Breakspear

    School of Psychology, University of Newcastle Australia, Newcastle, Australia
    Competing interests
    The authors declare that no competing interests exist.
    ORCID: 0000-0003-4943-3969

Funding

Health Education and Training Institute Award in Psychiatry and Mental Health

  • Jayson Jeganathan

Rainbow Foundation

  • Jayson Jeganathan
  • Michael Breakspear

National Health and Medical Research Council (1118153, 10371296, 1095227)

  • Michael Breakspear

Australian Research Council (CE140100007)

  • Michael Breakspear

The funders had no role in study design, data collection and interpretation, or the decision to submit the work for publication.

Ethics

Human subjects: Participants provided informed consent for the study. Ethics approval was obtained from the University of New South Wales (HREC-08077) and the University of Newcastle (H-2020-0137). Figure 1a shows images of a person's face from the DISFA dataset. Consent to reproduce their image in publications was obtained by the original DISFA authors, and is detailed in the dataset agreement (http://mohammadmahoor.com/disfa-contact-form/) and the original paper (https://ieeexplore.ieee.org/document/6475933).

Copyright

© 2022, Jeganathan et al.

This article is distributed under the terms of the Creative Commons Attribution License permitting unrestricted use and redistribution provided that the original author and source are credited.



Cite this article

Jayson Jeganathan, Megan Campbell, Matthew Hyett, Gordon Parker, Michael Breakspear (2022) Quantifying dynamic facial expressions under naturalistic conditions. eLife 11:e79581. https://doi.org/10.7554/eLife.79581
