Automated visual tracking of animals is rapidly becoming an indispensable tool for the study of behavior. It offers a quantitative methodology by which organisms' sensing and decision-making can be studied in a wide range of ecological contexts. Despite this, existing solutions tend to be challenging to deploy in practice, especially for long and/or high-resolution video streams. Here, we present TRex, a fast and easy-to-use solution for tracking a large number of individuals simultaneously using background subtraction, with real-time (60 Hz) tracking performance for up to approximately 256 individuals, which estimates 2D visual fields, outlines, and the head/rear of bilateral animals, in both open- and closed-loop contexts. Additionally, TRex offers highly accurate, deep-learning-based visual identification of up to approximately 100 unmarked individuals, where it is 2.5 to 46.7 times faster, and requires 2 to 10 times less memory, than comparable software (with relative performance increasing for more organisms and longer videos), and provides interactive data exploration within an intuitive, platform-independent graphical user interface.
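To illustrate the general technique named in the abstract, the sketch below shows background-subtraction segmentation in its simplest form: each frame is compared pixel-wise against a static background image, differing pixels are thresholded into a foreground mask, and connected components of that mask become candidate individuals. This is a minimal, stdlib-only illustration of the concept, not TRex's actual implementation; the threshold value and the toy frames are assumptions for the example.

```python
# Minimal sketch of background-subtraction tracking (illustrative only,
# NOT the TRex implementation). Grayscale frames are lists of lists of
# ints in [0, 255].

def subtract_background(frame, background, threshold=30):
    """Binary mask: True where the frame differs enough from the background."""
    return [
        [abs(p - b) > threshold for p, b in zip(frow, brow)]
        for frow, brow in zip(frame, background)
    ]

def find_blobs(mask):
    """Group foreground pixels into 4-connected components (candidate animals)."""
    h, w = len(mask), len(mask[0])
    seen = [[False] * w for _ in range(h)]
    blobs = []
    for y in range(h):
        for x in range(w):
            if mask[y][x] and not seen[y][x]:
                stack, blob = [(y, x)], []
                seen[y][x] = True
                while stack:  # iterative flood fill
                    cy, cx = stack.pop()
                    blob.append((cy, cx))
                    for ny, nx in ((cy-1, cx), (cy+1, cx), (cy, cx-1), (cy, cx+1)):
                        if 0 <= ny < h and 0 <= nx < w and mask[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            stack.append((ny, nx))
                blobs.append(blob)
    return blobs

def centroid(blob):
    """Mean (y, x) position of a blob, used as the individual's position."""
    ys = [p[0] for p in blob]
    xs = [p[1] for p in blob]
    return (sum(ys) / len(ys), sum(xs) / len(xs))

# Toy example: uniform bright background, one frame with two dark "animals".
background = [[200] * 8 for _ in range(6)]
frame = [row[:] for row in background]
for y, x in [(1, 1), (1, 2), (2, 1)]:  # animal A
    frame[y][x] = 40
for y, x in [(4, 5), (4, 6)]:          # animal B
    frame[y][x] = 40

mask = subtract_background(frame, background)
positions = [centroid(b) for b in find_blobs(mask)]
```

Running the same segmentation on every frame and linking nearby centroids across frames yields trajectories; real trackers add noise filtering, identity assignment, and posture estimation on top of this core step.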
Video data used in the evaluation of TRex has been deposited in the MPG Open Access Data Repository (Edmond), under the Creative Commons BY 4.0 license, at https://dx.doi.org/10.17617/3.4y. Most raw videos have been trimmed, since the original files are each up to 200 GB in size. Pre-processed versions (in PV format) are included, so that all steps after conversion can be reproduced directly (conversion speeds do not change with video length, so proportional results are reproducible as well). Full raw videos are made available upon reasonable request.

All analysis scripts, the scripts used to process the original videos, and the source code/pre-compiled binaries (linux-64) that were used are archived in this repository. Most intermediate data (PV videos, log files, tracking data, etc.) are included, and the binaries along with the scripts can be used to automatically regenerate all intermediate steps. The application source code is freely available at https://github.com/mooch443/trex.

Videos 11, 12 and 13 are part of idtracker.ai's example videos: https://drive.google.com/file/d/1pAR6oJjrEn7jf_OU2yMdyT2UJZMTNoKC/view?usp=sharing (10_zebrafish.tar.gz) [Francisco Romero, 2018, Examples for idtracker.ai, Online, Accessed 23-Oct-2020]. Video 7 (video_example_100fish_1min.avi): https://drive.google.com/file/d/1Tl64CHrQoc05PDElHvYGzjqtybQc4g37/view?usp=sharing [Francisco Romero, 2018, Examples for idtracker.ai, Online, Accessed 23-Oct-2020]. V1 from Appendix 12: https://drive.google.com/drive/folders/1Nir2fzgxofz-fcojEiG_JCNXsGQXj_9k [Francisco Romero, 2018, Examples for idtracker.ai, Online, Accessed 09-Feb-2021].
- Iain D Couzin
The funders had no role in study design, data collection and interpretation, or the decision to submit the work for publication.
Animal experimentation: We herewith confirm that the care and use of animals described in this work is covered by the protocols 35-9185.81/G-17/162, 35-9185.81/G-17/88 and 35-9185.81/G-16/116 granted by the Regional Council of the State of Baden-Württemberg, Freiburg, Germany, to the Max Planck Institute of Animal Behavior in accordance with the German Animal Welfare Act (TierSchG) and the Regulation for the Protection of Animals Used for Experimental or Other Scientific Purposes (Animal Welfare Regulation Governing Experimental Animals - TierSchVersV).
- David Lentink, Stanford University, United States
© 2021, Walter & Couzin
This article is distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use and redistribution provided that the original author and source are credited.
Previous research has highlighted the role of glutamate and gamma-aminobutyric acid (GABA) in perceptual, cognitive, and motor tasks. However, the exact involvement of these neurochemical mechanisms in the chain of information processing, and across human development, is unclear. In a cross-sectional longitudinal design, we used a computational approach to dissociate cognitive, decision, and visuomotor processing in 293 individuals spanning early childhood to adulthood. We found that glutamate and GABA within the intraparietal sulcus (IPS) explained unique variance in visuomotor processing, with higher glutamate predicting poorer visuomotor processing in younger participants but better visuomotor processing in mature participants, while GABA showed the opposite pattern. These findings, which were neurochemically, neuroanatomically and functionally specific, were replicated ~21 months later and generalized to two further behavioral tasks. Using resting-state functional MRI, we revealed that the relationship between IPS neurochemicals and visuomotor processing is mediated by functional connectivity in the visuomotor network. We then extended our findings to high-level cognitive behavior by predicting fluid intelligence performance. We present evidence that fluid intelligence performance is explained by IPS GABA and glutamate and is mediated by visuomotor processing. However, this evidence was obtained using an uncorrected alpha and needs to be replicated in future studies. These results provide an integrative biological and psychological mechanistic explanation that links cognitive processes and neurotransmitters across human development and establishes their potential involvement in intelligent behavior.
Cerebellar climbing fibers convey diverse signals, but how they are organized in the compartmental structure of the cerebellar cortex during learning remains largely unclear. We analyzed a large amount of coordinate-localized two-photon imaging data from cerebellar Crus II in mice undergoing ‘Go/No-go’ reinforcement learning. Tensor component analysis revealed that the majority of climbing fiber inputs to Purkinje cells were reduced to only four functional components, corresponding to accurate timing control of motor initiation related to a Go cue, cognitive error-based learning, reward processing, and inhibition of erroneous behaviors after a No-go cue. Changes in neural activities during learning of the first two components were correlated with corresponding changes in timing control and error learning across animals, indirectly suggesting causal relationships. The spatial distribution of these components coincided well with boundaries of Aldolase-C/zebrin II expression in Purkinje cells, whereas several components were mixed in single neurons. Synchronization within individual components was bidirectionally regulated according to specific task contexts and learning stages. These findings suggest that, in close collaboration with other brain regions including the inferior olive nucleus, the cerebellum, based on anatomical compartments, reduces the dimensions of the learning space by dynamically organizing multiple functional components, a feature that may inspire new-generation AI designs.