1. Computational and Systems Biology
  2. Ecology

TRex, a fast multi-animal tracking system with markerless identification, and 2D estimation of posture and visual fields

  1. Tristan Walter (corresponding author)
  2. Iain D Couzin (corresponding author)
  1. Max Planck Institute of Animal Behavior, Germany
Tools and Resources
Cite this article as: eLife 2021;10:e64000 doi: 10.7554/eLife.64000


Automated visual tracking of animals is rapidly becoming an indispensable tool for the study of behavior. It offers a quantitative methodology by which organisms' sensing and decision-making can be studied in a wide range of ecological contexts. Despite this, existing solutions tend to be challenging to deploy in practice, especially for long and/or high-resolution video streams. Here, we present TRex, a fast and easy-to-use solution for tracking large numbers of individuals simultaneously using background subtraction, with real-time (60 Hz) tracking performance for up to approximately 256 individuals. TRex also estimates 2D visual fields, outlines, and the head/rear orientation of bilateral animals, in both open- and closed-loop contexts. Additionally, TRex offers highly accurate, deep-learning-based visual identification of up to approximately 100 unmarked individuals, where it is 2.5-46.7 times faster and requires 2-10 times less memory than comparable software (with relative performance increasing for more organisms and longer videos), and it provides interactive data exploration within an intuitive, platform-independent graphical user interface.
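The abstract's core segmentation step, background subtraction, can be illustrated with a minimal sketch. This is a generic textbook version, not TRex's actual implementation: the background is estimated as the per-pixel median over all frames, and any pixel deviating from it by more than a grey-level threshold is marked as foreground. The threshold value and median-based background model are assumptions for illustration.

```python
import numpy as np

def background_subtract(frames, threshold=30):
    """Segment moving animals against a static background.

    Generic sketch (not TRex's implementation): the background is the
    per-pixel median over all frames; pixels deviating by more than
    `threshold` grey levels are flagged as foreground.
    """
    frames = np.asarray(frames, dtype=np.int16)   # avoid uint8 wrap-around
    background = np.median(frames, axis=0)        # per-pixel median image
    return np.abs(frames - background) > threshold  # boolean foreground masks

# Usage: five blank 4x4 frames with one bright "animal" pixel
# moving along the diagonal in the first four frames.
frames = np.zeros((5, 4, 4), dtype=np.uint8)
for t in range(4):
    frames[t, t, t] = 255
masks = background_subtract(frames)
```

Because each bright pixel appears in only one of the five frames, the median background stays dark and exactly the four moving pixels are detected as foreground.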

Article and author information

Author details

  1. Tristan Walter

    Department of Collective Behaviour, Max Planck Institute of Animal Behavior, Konstanz, Germany
    For correspondence
    Competing interests
    The authors declare that no competing interests exist.
    ORCID iD: 0000-0001-8604-7229
  2. Iain D Couzin

    Department of Collective Behaviour, Max Planck Institute of Animal Behavior, Konstanz, Germany
    For correspondence
    Competing interests
    The authors declare that no competing interests exist.
    ORCID iD: 0000-0001-8556-4558


Funding

Division of Integrative Organismal Systems (IOS-1355061)

  • Iain D Couzin

Office of Naval Research (N00014-19-1-2556)

  • Iain D Couzin

Deutsche Forschungsgemeinschaft (EXC 2117-422037984)

  • Iain D Couzin


  • Iain D Couzin

Struktur- und Innovationsfonds für die Forschung of the State of Baden-Württemberg

  • Iain D Couzin

The funders had no role in study design, data collection and interpretation, or the decision to submit the work for publication.


Ethics

Animal experimentation: We herewith confirm that the care and use of animals described in this work is covered by the protocols 35-9185.81/G-17/162, 35-9185.81/G-17/88 and 35-9185.81/G-16/116 granted by the Regional Council of the State of Baden-Württemberg, Freiburg, Germany, to the Max Planck Institute of Animal Behavior in accordance with the German Animal Welfare Act (TierSchG) and the Regulation for the Protection of Animals Used for Experimental or Other Scientific Purposes (Animal Welfare Regulation Governing Experimental Animals - TierSchVersV).

Reviewing Editor

  1. David Lentink, Stanford University, United States

Publication history

  1. Received: October 13, 2020
  2. Accepted: February 25, 2021
  3. Accepted Manuscript published: February 26, 2021 (version 1)


© 2021, Walter & Couzin

This article is distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use and redistribution provided that the original author and source are credited.




Further reading

    1. Computational and Systems Biology
    2. Neuroscience
    Shivesh Chaudhary et al.
    Research Article Updated

    Although identifying cell names in dense image stacks is critical for analyzing functional whole-brain data and enabling comparison across experiments, unbiased identification is very difficult and relies heavily on researchers' experience. Here, we present a probabilistic-graphical-model framework, CRF_ID, based on Conditional Random Fields, for unbiased and automated cell identification. CRF_ID focuses on maximizing intrinsic similarity between shapes. Compared to existing methods, CRF_ID achieves higher accuracy on simulated and ground-truth experimental datasets, and better robustness against challenging noise conditions common in experimental data. CRF_ID can further boost accuracy by building atlases from annotated data in a highly computationally efficient manner, and by easily adding new features (e.g. from new strains). We demonstrate cell annotation in Caenorhabditis elegans images across strains, animal orientations, and tasks including gene-expression localization, multi-cellular and whole-brain functional imaging experiments. Together, these successes demonstrate that unbiased cell annotation can facilitate biological discovery, and this approach may be valuable to annotation tasks for other systems.

    1. Cell Biology
    2. Computational and Systems Biology
    Ryan Conrad, Kedar Narayan
    Tools and Resources

    Automated segmentation of cellular electron microscopy (EM) datasets remains a challenge. Supervised deep learning (DL) methods that rely on region-of-interest (ROI) annotations yield models that fail to generalize to unrelated datasets. Newer unsupervised DL algorithms require relevant pre-training images; however, pre-training on currently available EM datasets is computationally expensive and shows little value for unseen biological contexts, as these datasets are large and homogeneous. To address this issue, we present CEM500K, a nimble 25 GB dataset of 0.5 × 10^6 unique 2D cellular EM images curated from nearly 600 three-dimensional (3D) and 10,000 two-dimensional (2D) images from >100 unrelated imaging projects. We show that models pre-trained on CEM500K learn features that are biologically relevant and resilient to meaningful image augmentations. Critically, we evaluate transfer learning from these pre-trained models on six publicly available and one newly derived benchmark segmentation task and report state-of-the-art results on each. We release the CEM500K dataset, pre-trained models and curation pipeline for model building and further expansion by the EM community. Data and code are available at https://www.ebi.ac.uk/pdbe/emdb/empiar/entry/10592/ and https://git.io/JLLTz.
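    The transfer-learning workflow described above (freeze a pre-trained encoder, train only a task-specific head on labeled data) can be sketched in miniature. This is a toy illustration under stated assumptions, not the CEM500K pipeline: the "pre-trained encoder" is stood in for by a fixed random projection with a ReLU, and the downstream task is a tiny logistic-regression head fit by gradient descent.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for an encoder pre-trained on a large unlabeled corpus
# (hypothetical: a frozen random projection, not a real trained network).
W_pretrained = rng.normal(size=(16, 4))

def encode(x):
    # Frozen features: W_pretrained is never updated during fine-tuning.
    return np.maximum(x @ W_pretrained, 0.0)

# Toy labeled downstream task.
X = rng.normal(size=(200, 16))
y = (X[:, 0] > 0).astype(float)

# Train only a linear (logistic-regression) head on the frozen features.
feats = encode(X)
w, b = np.zeros(4), 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(feats @ w + b)))   # sigmoid predictions
    w -= 0.5 * (feats.T @ (p - y)) / len(y)      # gradient step on weights
    b -= 0.5 * (p - y).mean()                    # gradient step on bias

train_acc = ((feats @ w + b > 0) == (y == 1)).mean()
```

The point of the sketch is the division of labor: the expensive encoder is reused as-is, and only the small head sees the labeled task, which is why a well-chosen pre-training corpus matters.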