Low-dimensional learned feature spaces quantify individual and group differences in vocal repertoires
Abstract
Increases in the scale and complexity of behavioral data pose a growing challenge for data analysis. A common strategy involves replacing entire behaviors with a small number of handpicked, domain-specific features, but this approach suffers from several crucial limitations. For example, handpicked features may miss important dimensions of variability, and correlations among them complicate statistical testing. Here, by contrast, we apply the variational autoencoder (VAE), an unsupervised learning method, to learn features directly from data and use them to quantify the vocal behavior of two model species: the laboratory mouse and the zebra finch. The VAE converges on a parsimonious representation that outperforms handpicked features on a variety of common analysis tasks, enables the measurement of moment-by-moment vocal variability in the zebra finch on the timescale of tens of milliseconds, provides strong evidence that mouse ultrasonic vocalizations do not cluster as is commonly believed, and captures the similarity of tutor and pupil birdsong with qualitatively higher fidelity than previous approaches. In all, we demonstrate the utility of modern unsupervised learning approaches for the quantification of complex and high-dimensional vocal behavior.
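The core of the approach is a VAE trained on spectrograms of individual vocalizations, compressing each one into a low-dimensional latent feature vector. The following is a minimal sketch of this idea in PyTorch; it is not the authors' implementation, and the fully connected architecture, 128×128 input size, and 32-dimensional latent space are illustrative assumptions.

```python
# Minimal spectrogram VAE sketch (illustrative; not the authors' code).
import torch
import torch.nn as nn
import torch.nn.functional as F

class SpectrogramVAE(nn.Module):
    def __init__(self, input_dim=128 * 128, latent_dim=32):
        super().__init__()
        # Encoder maps a flattened spectrogram to the parameters of a
        # diagonal Gaussian over the learned latent features.
        self.encoder = nn.Sequential(
            nn.Linear(input_dim, 512), nn.ReLU(),
            nn.Linear(512, 128), nn.ReLU(),
        )
        self.mu = nn.Linear(128, latent_dim)
        self.logvar = nn.Linear(128, latent_dim)
        # Decoder maps a latent sample back to a reconstructed spectrogram.
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 128), nn.ReLU(),
            nn.Linear(128, 512), nn.ReLU(),
            nn.Linear(512, input_dim), nn.Sigmoid(),
        )

    def forward(self, x):
        h = self.encoder(x)
        mu, logvar = self.mu(h), self.logvar(h)
        # Reparameterization trick: z = mu + sigma * eps, eps ~ N(0, I).
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)
        return self.decoder(z), mu, logvar

def vae_loss(recon, x, mu, logvar):
    # Reconstruction error plus KL divergence to the standard normal prior.
    recon_err = F.mse_loss(recon, x, reduction="sum")
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon_err + kl
```

After training, the encoder means serve as the learned feature vectors: downstream analyses (clustering, variability measurement, tutor-pupil similarity) operate on these latents rather than on handpicked acoustic features.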
Data availability
- Dataset 1: Online, publicly available MUPET dataset (~5 GB). Available at: https://github.com/mvansegbroeck/mupet/wiki/MUPET-wiki. Figs: 2, 3, 4d-e.
- Dataset 2: Single zebra finch data (~200-400 MB of audio) generated as part of work in progress in the Mooney Lab. Figs: 2e-f, 4a-c, 5a, 5d, 6b-e.
- Dataset 3: Mouse USV dataset (~30-40 GB of audio) generated as part of work in progress in the Mooney Lab. Figs: 4f.
- Dataset 5: A subset of Dataset 3, taken from a single mouse (~1 GB of audio). Figs: 5b-e, 6a.
- Dataset 6: 10 zebra finch pupil/tutor pairs (~60 GB of audio) generated as part of work in progress in the Mooney Lab. Figs: 7.

Upon acceptance, Datasets 2-6 will be archived in the Duke Digital Repository (https://research.repository.duke.edu). DOI in process.
Article and author information
Author details
Funding
National Institute of Mental Health (R01-MH117778)
- Richard Mooney
National Institute of Neurological Disorders and Stroke (R01-NS118424)
- Richard Mooney
- John Pearson
National Institute on Deafness and Other Communication Disorders (R01-DC013826)
- Richard Mooney
- John Pearson
National Institute of Neurological Disorders and Stroke (R01-NS099288)
- Richard Mooney
Eunice Kennedy Shriver National Institute of Child Health and Human Development (F31-HD098772)
- Samuel Brudner
The funders had no role in study design, data collection and interpretation, or the decision to submit the work for publication.
Ethics
Animal experimentation: All data generated in conjunction with this study were generated by experiments performed in strict accordance with the recommendations in the Guide for the Care and Use of Laboratory Animals of the National Institutes of Health. All of the animals were handled according to approved institutional animal care and use committee (IACUC) protocols of Duke University, protocol numbers A171-20-08 and A172-20-08.
Copyright
© 2021, Goffinet et al.
This article is distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use and redistribution provided that the original author and source are credited.
Metrics
- 5,219 views
- 595 downloads
- 80 citations
Views, downloads and citations are aggregated across all versions of this paper published by eLife.
Citations by DOI
- 80 citations for umbrella DOI https://doi.org/10.7554/eLife.67855