Social-affective features drive human representations of observed actions

  1. Diana C Dima (corresponding author)
  2. Tyler M Tomita
  3. Christopher J Honey
  4. Leyla Isik

  Johns Hopkins University, United States

Abstract

Humans observe actions performed by others in many different visual and social settings. What features do we extract and attend to when we view such complex scenes, and how are they processed in the brain? To answer these questions, we curated two large-scale sets of naturalistic videos of everyday actions and estimated their perceived similarity in two behavioral experiments. We normed and quantified a large range of visual, action-related, and social-affective features across the stimulus sets. Using a cross-validated variance partitioning analysis, we found that social-affective features predicted similarity judgments better than, and independently of, visual and action features in both behavioral experiments. Next, we conducted an electroencephalography (EEG) experiment, which revealed a sustained correlation between neural responses to videos and their behavioral similarity. Visual, action, and social-affective features predicted neural patterns at early, intermediate, and late stages, respectively, during this behaviorally relevant time window. Together, these findings show that social-affective features are important for perceiving naturalistic actions and are extracted at the final stage of a temporal gradient in the brain.
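
The cross-validated variance partitioning analysis named above can be illustrated with a short sketch. The code below is not the authors' implementation (their analysis code is in the GitHub repository listed under Data availability); it is a minimal Python example with toy data and illustrative feature-group names, showing how the unique variance explained by visual, action, and social-affective features could be estimated by comparing the cross-validated R² of nested regression models that predict pairwise behavioral dissimilarities.

    # Minimal sketch of cross-validated variance partitioning over behavioral
    # dissimilarities (not the authors' code; toy data, illustrative names).
    import numpy as np
    from itertools import combinations
    from sklearn.linear_model import LinearRegression
    from sklearn.model_selection import KFold

    rng = np.random.default_rng(0)
    n_pairs = 500                              # number of video pairs (upper triangle of an RDM)
    behavior = rng.normal(size=n_pairs)        # behavioral dissimilarity per pair (toy data)
    feature_groups = {                         # predictor dissimilarities per feature group (toy data)
        "visual": rng.normal(size=(n_pairs, 3)),
        "action": rng.normal(size=(n_pairs, 2)),
        "social_affective": rng.normal(size=(n_pairs, 2)),
    }

    def cv_r2(X, y, n_splits=5):
        """Cross-validated R^2 of a linear model predicting behavioral dissimilarity."""
        kf = KFold(n_splits=n_splits, shuffle=True, random_state=0)
        pred = np.empty_like(y)
        for train, test in kf.split(X):
            pred[test] = LinearRegression().fit(X[train], y[train]).predict(X[test])
        return 1 - np.sum((y - pred) ** 2) / np.sum((y - y.mean()) ** 2)

    # Fit every subset of feature groups, then compute each group's unique variance
    # as the drop in R^2 when that group is left out of the full model.
    groups = list(feature_groups)
    r2 = {}
    for k in range(1, len(groups) + 1):
        for subset in combinations(groups, k):
            X = np.hstack([feature_groups[g] for g in subset])
            r2[subset] = cv_r2(X, behavior)

    full = r2[tuple(groups)]
    for g in groups:
        others = tuple(x for x in groups if x != g)
        print(f"unique variance, {g}: {full - r2[others]:.3f}")

With the random toy data above, the unique variances will hover around zero; with real behavioral and feature dissimilarities, the same comparison indicates which feature group explains structure in the similarity judgments that the other groups cannot.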

Data availability

Behavioral and EEG data and results have been archived as an Open Science Framework repository (https://osf.io/hrmxn/). Analysis code is available on GitHub (https://github.com/dianadima/mot_action).

Article and author information

Author details

  1. Diana C Dima

    Department of Cognitive Science, Johns Hopkins University, Baltimore, United States
    For correspondence
    ddima@jhu.edu
    Competing interests
    The authors declare that no competing interests exist.
ORCID iD: 0000-0002-9612-5574
  2. Tyler M Tomita

    Department of Psychological and Brain Sciences, Johns Hopkins University, Baltimore, United States
    Competing interests
    The authors declare that no competing interests exist.
  3. Christopher J Honey

    Department of Psychological and Brain Sciences, Johns Hopkins University, Baltimore, United States
    Competing interests
    The authors declare that no competing interests exist.
ORCID iD: 0000-0002-0745-5089
  4. Leyla Isik

    Department of Cognitive Science, Johns Hopkins University, Baltimore, United States
    Competing interests
    The authors declare that no competing interests exist.

Funding

National Science Foundation (CCF-1231216)

  • Leyla Isik

The funders had no role in study design, data collection and interpretation, or the decision to submit the work for publication.

Reviewing Editor

  1. Chris I Baker, National Institute of Mental Health, National Institutes of Health, United States

Ethics

Human subjects: All procedures for data collection were approved by the Johns Hopkins University Institutional Review Board, with protocol numbers HIRB00009730 for the behavioral experiments and HIRB00009835 for the EEG experiment. Informed consent was obtained from all participants.

Version history

  1. Preprint posted: October 26, 2021
  2. Received: October 26, 2021
  3. Accepted: May 24, 2022
  4. Accepted Manuscript published: May 24, 2022 (version 1)
  5. Version of Record published: June 1, 2022 (version 2)

Copyright

© 2022, Dima et al.

This article is distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use and redistribution provided that the original author and source are credited.

Metrics

  • 1,610 views
  • 295 downloads
  • 23 citations

Views, downloads and citations are aggregated across all versions of this paper published by eLife.

Cite this article

Diana C Dima, Tyler M Tomita, Christopher J Honey, Leyla Isik (2022) Social-affective features drive human representations of observed actions. eLife 11:e75027. https://doi.org/10.7554/eLife.75027
