Conserved visual capacity of rats under red light
Abstract
Recent studies examine the behavioral capacities of rats and mice with and without visual input, and the neuronal mechanisms underlying such capacities. These animals are assumed to be functionally blind under red light, an assumption that might originate in the fact that they are dichromats who possess ultraviolet and green but not red cones. But the inability to see red as a color does not necessarily rule out form vision based on red light absorption. We measured Long-Evans rats' capacity for visual form discrimination under red light of various wavelength bands. Upon viewing a black and white grating, they had to distinguish between two categories of orientation, horizontal and vertical. Psychometric curves plotting judged orientation versus angle demonstrate the conserved visual capacity of rats under red light. Investigations aiming to explore rodent physiological and behavioral functions in the absence of visual input should not assume red-light blindness.
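The orientation-categorization analysis described in the abstract can be sketched as a psychometric-function fit: the proportion of "vertical" judgments is plotted against grating angle and fitted with a sigmoid. The data values, the logistic form, and the parameter names below are illustrative assumptions, not the authors' actual analysis pipeline.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical data: proportion of "vertical" judgments at each
# grating orientation (0 deg = horizontal, 90 deg = vertical).
angles = np.array([0, 15, 30, 45, 60, 75, 90], dtype=float)
p_vertical = np.array([0.02, 0.08, 0.25, 0.50, 0.76, 0.93, 0.98])

def psychometric(x, mu, sigma):
    """Logistic psychometric function: P('vertical') as a function of angle.

    mu    -- category boundary (angle of 50% 'vertical' judgments)
    sigma -- inverse-slope parameter (smaller = sharper discrimination)
    """
    return 1.0 / (1.0 + np.exp(-(x - mu) / sigma))

# Fit boundary and slope to the judgment proportions.
params, _ = curve_fit(psychometric, angles, p_vertical, p0=[45.0, 10.0])
mu, sigma = params
print(f"category boundary ~ {mu:.1f} deg, slope parameter ~ {sigma:.1f}")
```

Comparing such fitted curves across lighting conditions (e.g. broadband versus red light) is one standard way to quantify whether discrimination capacity is conserved.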
Data availability
All data generated or analyzed during this study are included in the manuscript and supporting files. Source code for Figures 1 and 2 is provided at https://github.com/nadernik/nikbakht_diamond_elife.
Article and author information
Funding
European Research Council (294498)
- Mathew E Diamond
Human Frontier Science Program (RGP0015/2013)
- Mathew E Diamond
The funders had no role in study design, data collection and interpretation, or the decision to submit the work for publication.
Reviewing Editor
- Martin Vinck, Ernst Strüngmann Institute (ESI) for Neuroscience in Cooperation with Max Planck Society, Germany
Ethics
Animal experimentation: The rats were under the care of a consulting veterinarian. Study protocols conformed to international norms and were approved by the Ethics Committee of SISSA and by the Italian Health Ministry (license numbers 569/2015-PR and 570/2015-PR).
Version history
- Preprint posted: November 6, 2020
- Received: January 11, 2021
- Accepted: July 19, 2021
- Accepted Manuscript published: July 20, 2021 (version 1)
- Version of Record published: August 10, 2021 (version 2)
Copyright
© 2021, Nikbakht & Diamond
This article is distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use and redistribution provided that the original author and source are credited.
Metrics
- 5,753 views
- 435 downloads
- 27 citations
Views, downloads and citations are aggregated across all versions of this paper published by eLife.