Quantifying dynamic facial expressions under naturalistic conditions

  1. Jayson Jeganathan (corresponding author)
  2. Megan Campbell
  3. Matthew Hyett
  4. Gordon Parker
  5. Michael Breakspear
  1. University of Newcastle Australia, Australia
  2. University of Western Australia, Australia
  3. University of New South Wales, Australia

Abstract

Facial affect is expressed dynamically - a giggle, grimace, or an agitated frown. However, the characterization of human affect has relied almost exclusively on static images. This approach cannot capture the nuances of human communication or support the naturalistic assessment of affective disorders. Using the latest in machine vision and systems modelling, we studied dynamic facial expressions of people viewing emotionally salient film clips. We found that the apparent complexity of dynamic facial expressions can be captured by a small number of simple spatiotemporal states - composites of distinct facial actions, each expressed with a unique spectral fingerprint. Sequential expression of these states is common across individuals viewing the same film stimuli but varies in those with the melancholic subtype of major depressive disorder. This approach provides a platform for translational research, capturing dynamic facial expressions under naturalistic conditions and enabling new quantitative tools for the study of affective disorders and related mental illnesses.
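The pipeline sketched in the abstract, frame-by-frame facial action unit (AU) intensities reduced to a small number of hidden states with characteristic spectral content, can be illustrated in a few lines. The sketch below is illustrative only, not the authors' released code: it assumes AU intensities have already been extracted from video (for example, with a toolkit such as OpenFace), substitutes random numbers for real AU traces, and uses off-the-shelf hmmlearn and SciPy routines; the state count and frame rate are placeholder values.

    # Illustrative sketch only (not the authors' code). Input: a frames x AUs
    # matrix of facial action unit intensities, here replaced by random data.
    import numpy as np
    from hmmlearn.hmm import GaussianHMM  # pip install hmmlearn
    from scipy.signal import welch

    rng = np.random.default_rng(0)
    fps = 30                               # assumed video frame rate
    X = rng.random((3000, 12))             # placeholder: 3000 frames x 12 AUs

    # Each hidden state is a composite of AUs (the state's Gaussian mean),
    # and the model yields a sequence of state visits over time.
    n_states = 5                           # placeholder; select by model comparison
    hmm = GaussianHMM(n_components=n_states, covariance_type="full",
                      random_state=0).fit(X)
    states = hmm.predict(X)                # one state label per video frame

    # Crude "spectral fingerprint" per state: Welch power spectrum of mean AU
    # activity over the frames assigned to that state (this ignores that state
    # visits are non-contiguous, so treat it as a rough summary only).
    for k in range(n_states):
        f, p = welch(X[states == k].mean(axis=1), fs=fps)
        print(f"state {k}: peak frequency {f[np.argmax(p)]:.2f} Hz")

On real data, the state sequence returned by predict can then be compared across individuals viewing the same film stimulus, which is the kind of between-subject comparison the abstract describes.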

Data availability

The DISFA dataset is publicly available at http://mohammadmahoor.com/disfa/ and can be accessed by application at http://mohammadmahoor.com/disfa-contact-form/. The melancholia dataset is not publicly available due to ethical and privacy considerations for patients, and because the original ethics approval does not permit sharing these data.

The following previously published dataset was used: DISFA, the Denver Intensity of Spontaneous Facial Action database (Mavadati et al., 2013).

Article and author information

Author details

  1. Jayson Jeganathan

    School of Psychology, University of Newcastle Australia, Newcastle, Australia
    For correspondence
    jayson.jeganathan@gmail.com
    Competing interests
    The authors declare that no competing interests exist.
    ORCID iD: 0000-0003-4175-918X
  2. Megan Campbell

    School of Psychology, University of Newcastle Australia, Newcastle, Australia
    Competing interests
    The authors declare that no competing interests exist.
    ORCID iD: 0000-0003-4051-1529
  3. Matthew Hyett

    School of Psychological Sciences, University of Western Australia, Perth, Australia
    Competing interests
    The authors declare that no competing interests exist.
  4. Gordon Parker

    School of Psychiatry, University of New South Wales, Kensington, Australia
    Competing interests
    The authors declare that no competing interests exist.
  5. Michael Breakspear

    School of Psychology, University of Newcastle Australia, Newcastle, Australia
    Competing interests
    The authors declare that no competing interests exist.
    ORCID iD: 0000-0003-4943-3969

Funding

Health Education and Training Institute Award in Psychiatry and Mental Health

  • Jayson Jeganathan

Rainbow Foundation

  • Jayson Jeganathan
  • Michael Breakspear

National Health and Medical Research Council (1118153, 10371296, 1095227)

  • Michael Breakspear

Australian Research Council (CE140100007)

  • Michael Breakspear

The funders had no role in study design, data collection and interpretation, or the decision to submit the work for publication.

Reviewing Editor

  1. Alexander Shackman, University of Maryland, United States

Ethics

Human subjects: Participants provided informed consent for the study. Ethics approval was obtained from the University of New South Wales (HREC-08077) and the University of Newcastle (H-2020-0137). Figure 1a shows images of a person's face from the DISFA dataset. Consent to reproduce their image in publications was obtained by the original DISFA authors, and is detailed in the dataset agreement (http://mohammadmahoor.com/disfa-contact-form/) and the original paper (https://ieeexplore.ieee.org/document/6475933).

Version history

  1. Received: April 19, 2022
  2. Preprint posted: May 10, 2022 (view preprint)
  3. Accepted: August 24, 2022
  4. Accepted Manuscript published: August 31, 2022 (version 1)
  5. Version of Record published: September 2, 2022 (version 2)

Copyright

© 2022, Jeganathan et al.

This article is distributed under the terms of the Creative Commons Attribution License permitting unrestricted use and redistribution provided that the original author and source are credited.

Metrics

  • Page views: 1,335
  • Downloads: 243
  • Citations: 4

Article citation count generated by polling the highest count across the following sources: Crossref, PubMed Central, Scopus.

Cite this article

Jayson Jeganathan, Megan Campbell, Matthew Hyett, Gordon Parker, Michael Breakspear (2022) Quantifying dynamic facial expressions under naturalistic conditions. eLife 11:e79581. https://doi.org/10.7554/eLife.79581

