DeepEthogram, a machine learning pipeline for supervised behavior classification from raw pixels
Abstract
Videos of animal behavior are used to quantify researcher-defined behaviors-of-interest to study neural function, gene mutations, and pharmacological therapies. Behaviors-of-interest are often scored manually, which is time-consuming, limited to few behaviors, and variable across researchers. We created DeepEthogram: software that uses supervised machine learning to convert raw video pixels into an ethogram, the behaviors-of-interest present in each video frame. DeepEthogram is designed to be general-purpose and applicable across species, behaviors, and video-recording hardware. It uses convolutional neural networks to compute motion, extract features from motion and images, and classify features into behaviors. Behaviors are classified with above 90% accuracy on single frames in videos of mice and flies, matching expert-level human performance. DeepEthogram accurately predicts rare behaviors, requires little training data, and generalizes across subjects. A graphical interface allows beginning-to-end analysis without end-user programming. DeepEthogram's rapid, automatic, and reproducible labeling of researcher-defined behaviors-of-interest may accelerate and enhance supervised behavior analysis.
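The abstract describes a three-stage pipeline: estimate motion from raw frames, extract features from both the motion representation and the raw images, and classify the fused features into a per-frame, multi-label ethogram. The sketch below illustrates that idea in PyTorch (the framework DeepEthogram is built on). It is a minimal, hypothetical illustration only: the class name TwoStreamEthogram, the layer sizes, and the frame-stack layout are assumptions for this example, not the published architecture; see the linked GitHub repository for the actual flow-generator, feature-extractor, and sequence-model components.

import torch
import torch.nn as nn

class TwoStreamEthogram(nn.Module):
    """Illustrative two-stream model: motion + appearance -> ethogram.

    Hypothetical sketch; not the DeepEthogram implementation.
    """

    def __init__(self, n_behaviors: int, n_flow_frames: int = 10):
        super().__init__()
        self.n_flow_frames = n_flow_frames
        # Stage 1: compute a motion (optic-flow-like) representation
        # from a stack of consecutive RGB frames.
        self.motion_net = nn.Sequential(
            nn.Conv2d(3 * n_flow_frames, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 2 * (n_flow_frames - 1), 3, padding=1),
        )
        # Stage 2a: features from the motion representation.
        self.flow_features = nn.Sequential(
            nn.Conv2d(2 * (n_flow_frames - 1), 64, 3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # Stage 2b: features from the appearance of the labeled frame.
        self.image_features = nn.Sequential(
            nn.Conv2d(3, 64, 3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # Stage 3: fuse both feature streams and classify. Sigmoid output,
        # not softmax, so multiple behaviors can co-occur on one frame.
        self.classifier = nn.Linear(128, n_behaviors)

    def forward(self, frame_stack: torch.Tensor) -> torch.Tensor:
        # frame_stack: (batch, 3 * n_flow_frames, H, W)
        flow = self.motion_net(frame_stack)
        c = 3 * (self.n_flow_frames // 2)
        center_frame = frame_stack[:, c:c + 3]  # appearance of the center frame
        feats = torch.cat(
            [self.flow_features(flow), self.image_features(center_frame)], dim=1
        )
        return torch.sigmoid(self.classifier(feats))  # per-behavior probabilities

In use, a call like TwoStreamEthogram(n_behaviors=5)(torch.randn(2, 30, 64, 64)) returns a (2, 5) tensor of per-frame behavior probabilities; a multi-label model of this kind would typically be trained with per-behavior binary cross-entropy against the human annotations.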
Data availability
Code is publicly available on GitHub and linked in the paper. Video datasets and human annotations are publicly available and linked in the paper.
Article and author information
Author details
Funding
National Institutes of Health (R01MH107620)
- Christopher D Harvey
National Science Foundation (GRFP)
- Nivanthika K Wimalasena
Fundação para a Ciência e a Tecnologia (PD/BD/105947/2014)
- Tomás Cruz
Harvard Medical School Dean's Innovation Award
- Christopher D Harvey
Harvard Medical School Goldenson Research Award
- Christopher D Harvey
National Institutes of Health (DP1 MH125776)
- Christopher D Harvey
National Institutes of Health (R01NS089521)
- Christopher D Harvey
National Institutes of Health (R01NS108410)
- Christopher D Harvey
National Institutes of Health (F31NS108450)
- James P Bohnslav
National Institutes of Health (R35NS105076)
- Clifford J Woolf
National Institutes of Health (R01AT011447)
- Clifford J Woolf
National Institutes of Health (R00NS101057)
- Lauren L Orefice
National Institutes of Health (K99DE028360)
- David A Yarmolinsky
European Research Council (ERC-Stg-759782)
- M Eugenia Chiappe
The funders had no role in study design, data collection and interpretation, or the decision to submit the work for publication.
Ethics
Animal experimentation: All experimental procedures were approved by the Institutional Animal Care and Use Committees at Boston Children's Hospital (protocol numbers 17-06-3494R and 19-01-3809R) or Massachusetts General Hospital (protocol number 2018N000219) and were performed in compliance with the Guide for the Care and Use of Laboratory Animals.
Copyright
© 2021, Bohnslav et al.
This article is distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use and redistribution provided that the original author and source are credited.
Metrics
- 14,358 views
- 1,356 downloads
- 123 citations
Views, downloads and citations are aggregated across all versions of this paper published by eLife.