Comprehensive machine learning analysis of Hydra behavior reveals a stable behavioral repertoire

  1. Shuting Han (corresponding author)
  2. Ekaterina Taralova
  3. Christophe Dupre
  4. Rafael Yuste (corresponding author)

  Columbia University, United States

Abstract

Animal behavior has been studied for centuries, but few efficient methods are available to automatically identify and classify behaviors. Quantitative behavioral studies have been hindered by the subjective and imprecise nature of human observation, the limitations of human vision, and the slow speed of annotating behavioral data. Here we developed an automatic behavior analysis pipeline for the cnidarian Hydra vulgaris using machine learning approaches. We imaged freely behaving Hydra, extracted motion and shape features from the videos, and constructed a dictionary of visual features to classify pre-defined behaviors. We also identified unannotated behaviors with unsupervised methods. Using this analysis pipeline, we found surprisingly similar behavior statistics across animals within the same species, regardless of experimental conditions. Our analysis indicates that the behavioral repertoire of Hydra is stable. This robustness could reflect homeostatic neural control that may already have been present in the earliest nervous systems.
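
The abstract outlines a bag-of-visual-words style pipeline: per-frame motion and shape features are quantized against a learned dictionary, segment-level histograms are classified into pre-defined behaviors, and unsupervised clustering is used to look for unannotated behaviors. The sketch below is a minimal illustration of that general approach, not the authors' implementation; the synthetic features, random annotation labels, and the specific choices of KMeans, a linear SVM, and agglomerative clustering are assumptions made for demonstration only.

```python
# Minimal illustrative sketch (not the authors' code): a bag-of-visual-words
# behavior classifier plus unsupervised clustering, run on synthetic features.
# All dimensions, labels, and model choices here are assumptions.
import numpy as np
from sklearn.cluster import KMeans, AgglomerativeClustering
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Stand-in for per-frame motion/shape descriptors extracted from video
# (n_segments video segments, n_frames frames each, d-dimensional features).
n_segments, n_frames, d = 60, 100, 16
frame_features = rng.normal(size=(n_segments, n_frames, d))

# 1) Build a "dictionary" of visual features by clustering all frame descriptors.
n_words = 20
codebook = KMeans(n_clusters=n_words, n_init=10, random_state=0)
codebook.fit(frame_features.reshape(-1, d))

# 2) Represent each video segment as a normalized histogram of visual words.
def bag_of_words(segment):
    words = codebook.predict(segment)
    hist, _ = np.histogram(words, bins=np.arange(n_words + 1))
    return hist / hist.sum()

X = np.array([bag_of_words(seg) for seg in frame_features])

# 3) Supervised step: classify pre-defined behaviors from annotated segments
#    (labels here are random placeholders for human annotations).
y = rng.integers(0, 4, size=n_segments)          # e.g. 4 behavior categories
clf = SVC(kernel="linear").fit(X[:40], y[:40])   # train on annotated segments
print("predicted behaviors:", clf.predict(X[40:]))

# 4) Unsupervised step: group segments to look for unannotated behaviors.
clusters = AgglomerativeClustering(n_clusters=5).fit_predict(X)
print("unsupervised cluster sizes:", np.bincount(clusters))
```

In the actual study, the frame descriptors would come from the motion and shape features extracted from the Hydra recordings rather than random numbers, and the number of visual words, behavior categories, and clusters would be set by the data.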

Data availability

The following data sets were generated

Article and author information

Author details

  1. Shuting Han

    Department of Biological Sciences, Columbia University, New York, United States
    For correspondence
    sh3276@columbia.edu
    Competing interests
    The authors declare that no competing interests exist.
  2. Ekaterina Taralova

    Department of Biological Sciences, Columbia University, New York, United States
    Competing interests
    The authors declare that no competing interests exist.
  3. Christophe Dupre

    Department of Biological Sciences, Columbia University, New York, United States
    Competing interests
    The authors declare that no competing interests exist.
ORCID iD: 0000-0002-5929-8492
  4. Rafael Yuste

    Department of Biological Sciences, Columbia University, New York, United States
    For correspondence
    rmy5@columbia.edu
    Competing interests
    The authors declare that no competing interests exist.
ORCID iD: 0000-0003-4206-497X

Funding

Defense Advanced Research Projects Agency (HR0011-17-C-0026)

  • Rafael Yuste

Howard Hughes Medical Institute (International Student Research Fellowship)

  • Shuting Han

Grass Foundation (Grass Fellowship)

  • Christophe Dupre

The funders had no role in study design, data collection and interpretation, or the decision to submit the work for publication.

Reviewing Editor

  1. Ronald L Calabrese, Emory University, United States

Version history

  1. Received: October 9, 2017
  2. Accepted: March 23, 2018
  3. Accepted Manuscript published: March 28, 2018 (version 1)
  4. Version of Record published: April 27, 2018 (version 2)

Copyright

© 2018, Han et al.

This article is distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use and redistribution provided that the original author and source are credited.

Cite this article

Shuting Han, Ekaterina Taralova, Christophe Dupre, Rafael Yuste (2018) Comprehensive machine learning analysis of Hydra behavior reveals a stable behavioral repertoire. eLife 7:e32605. https://doi.org/10.7554/eLife.32605
