Fast and accurate characterization of animal behaviors is crucial for neuroscience research. Deep learning models are efficiently used in laboratories for behavior analysis. However, no end-to-end unsupervised neural network has yet been used to extract comprehensive and discriminative features directly from social behavior video frames for annotation and analysis. Here, we report Selfee, a self-supervised feature-extraction convolutional neural network with multiple downstream applications that processes video frames of animal behavior in an end-to-end way. Visualization and classification of the extracted features (Meta-representations) validate that Selfee processes animal behaviors in a way similar to human perception. We demonstrate that Meta-representations can be efficiently used to detect anomalous behaviors that are indiscernible to human observers and to hint at directions for in-depth analysis. Furthermore, time-series analyses of Meta-representations reveal the temporal dynamics of animal behaviors. In conclusion, we present a self-supervised learning approach that extracts comprehensive and discriminative features directly from raw video recordings of animal behaviors and demonstrate its potential use in various downstream applications.
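To illustrate one downstream use of frame-level features, anomalous frames can be flagged by their distance to a reference set of normal embeddings, for example with a k-nearest-neighbor score. The sketch below is a minimal, hypothetical example: it uses random vectors in place of real Selfee Meta-representations, and the function name and parameters are illustrative, not from the paper.

```python
import numpy as np

def knn_anomaly_scores(queries, reference, k=5):
    """Score each query embedding by its mean Euclidean distance to its
    k nearest neighbors in a reference set of 'normal' embeddings.
    Larger scores indicate more anomalous frames."""
    # Pairwise distances, shape (n_queries, n_reference)
    dists = np.linalg.norm(queries[:, None, :] - reference[None, :, :], axis=-1)
    # Average over the k smallest distances per query frame
    nearest = np.sort(dists, axis=1)[:, :k]
    return nearest.mean(axis=1)

rng = np.random.default_rng(0)
reference = rng.normal(0.0, 1.0, size=(200, 16))  # embeddings of typical behavior
normal = rng.normal(0.0, 1.0, size=(5, 16))       # frames resembling the reference
anomalous = rng.normal(4.0, 1.0, size=(5, 16))    # shifted cluster = unusual behavior

scores = knn_anomaly_scores(np.vstack([normal, anomalous]), reference)
print(scores[:5].mean() < scores[5:].mean())  # anomalous frames score higher
```

In practice the reference set would be embeddings extracted from wild-type recordings, and frames with high scores would be inspected manually or analyzed further.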
Major data used in this study, including pretrained weights, were uploaded to Dryad and can be accessed via https://doi.org/10.5061/dryad.brv15dvb8. With the uploaded dataset and pretrained weights, our experiments can be replicated. However, because of its size (about 400 GB) and our limited internet service resources, we are currently unable to share the full training dataset: it is hard to upload to a public server and would be difficult for other users to download. The full training dataset is available from the corresponding author upon reasonable request (firstname.lastname@example.org), after which we can discuss how to transfer it. No project proposal is needed as long as the dataset is not used for any commercial purpose. Our Python scripts can be accessed on GitHub: https://github.com/EBGU/Selfee. Other software used in our project includes ImageJ (https://imagej.net/software/fiji/) and GraphPad Prism (https://www.graphpad.com/). All data used to plot graphs and charts in the manuscript can be fully accessed on Dryad (DOI 10.5061/dryad.brv15dvb8).
Data from: Selfee: Self-supervised features extraction of animal behaviors. Dryad Digital Repository, doi:10.5061/dryad.brv15dvb8.
- Wei Zhang
The funders had no role in study design, data collection and interpretation, or the decision to submit the work for publication.
Animal experimentation: All mating experiments were approved by the Animal Care and Use Committee of the Institute of Neuroscience, CAS Center for Excellence in Brain Science and Intelligence Technology, Chinese Academy of Sciences, Shanghai, China (IACUC No. NA-016-2016). All studies and experimental protocols of CIS and OFT were approved by the Institutional Animal Care and Use Committee (IACUC) at Tsinghua University (No. 19-ZY1). Experiments were performed in accordance with the principles outlined in the Guide for the Care and Use of Laboratory Animals of Tsinghua University.
- Marta Zlatic, MRC Laboratory of Molecular Biology, United Kingdom
- Received: December 8, 2021
- Accepted: June 15, 2022
- Accepted Manuscript published: June 16, 2022 (version 1)
© 2022, Jia et al.
This article is distributed under the terms of the Creative Commons Attribution License permitting unrestricted use and redistribution provided that the original author and source are credited.