DeepEthogram, a machine learning pipeline for supervised behavior classification from raw pixels

  1. James P Bohnslav
  2. Nivanthika K Wimalasena
  3. Kelsey J Clausing
  4. Yu Y Dai
  5. David A Yarmolinsky
  6. Tomás Cruz
  7. Adam D Kashlan
  8. M Eugenia Chiappe
  9. Lauren L Orefice
  10. Clifford J Woolf
  11. Christopher D Harvey (corresponding author)
  1. Harvard Medical School, United States
  2. Boston Children's Hospital, United States
  3. Massachusetts General Hospital, United States
  4. Champalimaud Center for the Unknown, Portugal

Abstract

Videos of animal behavior are used to quantify researcher-defined behaviors-of-interest to study neural function, gene mutations, and pharmacological therapies. Behaviors-of-interest are often scored manually, which is time-consuming, limited to a few behaviors, and variable across researchers. We created DeepEthogram: software that uses supervised machine learning to convert raw video pixels into an ethogram, the behaviors-of-interest present in each video frame. DeepEthogram is designed to be general-purpose and applicable across species, behaviors, and video-recording hardware. It uses convolutional neural networks to compute motion, extract features from motion and images, and classify features into behaviors. Behaviors are classified with above 90% accuracy on single frames in videos of mice and flies, matching expert-level human performance. DeepEthogram accurately predicts rare behaviors, requires little training data, and generalizes across subjects. A graphical interface allows beginning-to-end analysis without end-user programming. DeepEthogram's rapid, automatic, and reproducible labeling of researcher-defined behaviors-of-interest may accelerate and enhance supervised behavior analysis.
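
To make the pipeline described in the abstract concrete (estimate motion from a stack of frames, extract features from both the raw image and the motion with a two-stream network, classify features into per-frame behavior probabilities, and threshold those into an ethogram), here is a minimal PyTorch sketch. All module names, layer sizes, the 11-frame window, and the 0.5 threshold are illustrative assumptions, not the released DeepEthogram models; see the GitHub repository linked in the paper for the actual implementation.

    # Illustrative sketch of a DeepEthogram-style forward pass.
    # NOT the released models: layer sizes and names are placeholders.
    import torch
    import torch.nn as nn

    class FlowGenerator(nn.Module):
        """Estimates motion (optic-flow-like) maps from a stack of frames."""
        def __init__(self, n_frames=11):
            super().__init__()
            # Two flow channels (dx, dy) per adjacent frame pair.
            self.net = nn.Sequential(
                nn.Conv2d(3 * n_frames, 32, 3, padding=1), nn.ReLU(),
                nn.Conv2d(32, 2 * (n_frames - 1), 3, padding=1),
            )

        def forward(self, frames):          # frames: (B, 3*n_frames, H, W)
            return self.net(frames)         # flows: (B, 2*(n_frames-1), H, W)

    class FeatureExtractor(nn.Module):
        """Two-stream CNN: one stream sees an image, the other sees motion."""
        def __init__(self, flow_channels=20, n_behaviors=5):
            super().__init__()
            def stream(in_ch):
                return nn.Sequential(
                    nn.Conv2d(in_ch, 32, 3, padding=1), nn.ReLU(),
                    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                )
            self.spatial = stream(3)                 # raw pixels
            self.motion = stream(flow_channels)      # computed motion
            self.classifier = nn.Linear(64, n_behaviors)

        def forward(self, image, flows):
            feats = torch.cat([self.spatial(image), self.motion(flows)], dim=1)
            return self.classifier(feats)            # per-frame behavior logits

    # Toy end-to-end pass: 11 stacked RGB frames -> motion -> one ethogram row.
    frames = torch.randn(1, 33, 64, 64)       # 11 RGB frames, channel-stacked
    center_image = frames[:, 15:18]           # RGB channels of the middle frame
    flows = FlowGenerator(n_frames=11)(frames)
    logits = FeatureExtractor()(center_image, flows)
    probs = torch.sigmoid(logits)             # multi-label: behaviors can co-occur
    ethogram_row = (probs > 0.5).int()        # binary behaviors for this frame
    print(ethogram_row)

Stacking one such row per video frame yields the full ethogram: a frames-by-behaviors binary matrix indicating which behaviors-of-interest are present at each time point.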

Data availability

Code is posted publicly on GitHub and linked in the paper. Video datasets and human annotations are publicly available and linked in the paper.

Article and author information

Author details

  1. James P Bohnslav

    Neurobiology, Harvard Medical School, Boston, United States
    Competing interests
    The authors declare that no competing interests exist.
  2. Nivanthika K Wimalasena

    F.M. Kirby Neurobiology Center, Boston Children's Hospital, Boston, United States
    Competing interests
    The authors declare that no competing interests exist.
  3. Kelsey J Clausing

    Molecular Biology, Massachusetts General Hospital, Boston, United States
    Competing interests
    The authors declare that no competing interests exist.
  4. Yu Y Dai

    Molecular Biology, Massachusetts General Hospital, Boston, United States
    Competing interests
    The authors declare that no competing interests exist.
  5. David A Yarmolinsky

    F.M. Kirby Neurobiology Center, Boston Children's Hospital, Boston, United States
    Competing interests
    The authors declare that no competing interests exist.
  6. Tomás Cruz

    Champalimaud Neuroscience Programme, Champalimaud Center for the Unknown, Lisbon, Portugal
    Competing interests
    The authors declare that no competing interests exist.
  7. Adam D Kashlan

    F.M. Kirby Neurobiology Center, Boston Children's Hospital, Boston, United States
    Competing interests
    The authors declare that no competing interests exist.
  8. M Eugenia Chiappe

    Champalimaud Neuroscience Programme, Champalimaud Center for the Unknown, Lisbon, Portugal
    Competing interests
    The authors declare that no competing interests exist.
    ORCID iD: 0000-0003-1761-0457
  9. Lauren L Orefice

    Molecular Biology, Massachusetts General Hospital, Boston, United States
    Competing interests
    The authors declare that no competing interests exist.
  10. Clifford J Woolf

    Department of Neurobiology, Harvard Medical School, Boston, United States
    Competing interests
    The authors declare that no competing interests exist.
  11. Christopher D Harvey

    Neurobiology, Harvard Medical School, Boston, United States
    For correspondence
    harvey@hms.harvard.edu
    Competing interests
    The authors declare that no competing interests exist.
    ORCID iD: 0000-0001-9850-2268

Funding

National Institutes of Health (R01MH107620)

  • Christopher D Harvey

National Science Foundation (GRFP)

  • Nivanthika K Wimalasena

Fundação para a Ciência e a Tecnologia (PD/BD/105947/2014)

  • Tomás Cruz

Harvard Medical School Dean's Innovation Award

  • Christopher D Harvey

Harvard Medical School Goldenson Research Award

  • Christopher D Harvey

National Institutes of Health (DP1 MH125776)

  • Christopher D Harvey

National Institutes of Health (R01NS089521)

  • Christopher D Harvey

National Institutes of Health (R01NS108410)

  • Christopher D Harvey

National Institutes of Health (F31NS108450)

  • James P Bohnslav

National Institutes of Health (R35NS105076)

  • Clifford J Woolf

National Institutes of Health (R01AT011447)

  • Clifford J Woolf

National Institutes of Health (R00NS101057)

  • Lauren L Orefice

National Institutes of Health (K99DE028360)

  • David A Yarmolinsky

European Research Council (ERC-Stg-759782)

  • M Eugenia Chiappe

The funders had no role in study design, data collection and interpretation, or the decision to submit the work for publication.

Ethics

Animal experimentation: All experimental procedures were approved by the Institutional Animal Care and Use Committees at Boston Children's Hospital (protocol numbers 17-06-3494R and 19-01-3809R) or Massachusetts General Hospital (protocol number 2018N000219) and were performed in compliance with the Guide for the Care and Use of Laboratory Animals.

Copyright

© 2021, Bohnslav et al.

This article is distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use and redistribution provided that the original author and source are credited.

Metrics

  • 14,358 views
  • 1,356 downloads
  • 123 citations

Views, downloads and citations are aggregated across all versions of this paper published by eLife.

Cite this article

James P Bohnslav, Nivanthika K Wimalasena, Kelsey J Clausing, Yu Y Dai, David A Yarmolinsky, Tomás Cruz, Adam D Kashlan, M Eugenia Chiappe, Lauren L Orefice, Clifford J Woolf, Christopher D Harvey (2021) DeepEthogram, a machine learning pipeline for supervised behavior classification from raw pixels. eLife 10:e63377. https://doi.org/10.7554/eLife.63377
