Transcriptomic encoding of sensorimotor transformation in the midbrain
Abstract
Sensorimotor transformation, a process that converts sensory stimuli into motor actions, is critical for the brain to initiate behaviors. Although the circuitry involved in sensorimotor transformation has been well delineated, the molecular logic behind this process remains poorly understood. Here, we performed high-throughput and circuit-specific single-cell transcriptomic analyses of neurons in the superior colliculus (SC), a midbrain structure implicated in early sensorimotor transformation. We found that SC neurons in distinct laminae express discrete marker genes. Of particular interest, Cbln2 and Pitx2 are key markers that define glutamatergic projection neurons in the optic nerve (Op) and intermediate gray (InG) layers, respectively. The Cbln2+ neurons responded to visual stimuli mimicking cruising predators, while the Pitx2+ neurons encoded prey-derived vibrissal tactile cues. By forming distinct input and output connections with other brain areas, these neuronal subtypes independently mediate behaviors of predator avoidance and prey capture. Our results reveal that, in the midbrain, sensorimotor transformation for different behaviors may be performed by separate circuit modules that are molecularly defined by distinct transcriptomic codes.
Data availability
The scRNA-seq data used in this study have been deposited in the Gene Expression Omnibus (GEO) under accession number GSE162404 (https://www.ncbi.nlm.nih.gov/geo/query/acc.cgi?acc=GSE162404).
- Revealing the molecular mechanism of mouse innate behavior by single-cell sequencing. NCBI Gene Expression Omnibus, GSE162404.
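The deposited series can be retrieved programmatically from the GEO accession viewer. A minimal sketch (the helper function name and use of Python's standard library are illustrative conventions, not part of the article):

```python
from urllib.parse import urlencode

# Base URL of the NCBI GEO accession viewer, as given in the data availability statement.
GEO_BASE = "https://www.ncbi.nlm.nih.gov/geo/query/acc.cgi"

def geo_series_url(accession: str) -> str:
    """Build the GEO accession-viewer URL for a series such as GSE162404."""
    return f"{GEO_BASE}?{urlencode({'acc': accession})}"

# The resulting URL can then be fetched with any HTTP client, e.g.
#   urllib.request.urlopen(geo_series_url("GSE162404"))
print(geo_series_url("GSE162404"))
```

In practice one would more often use a dedicated GEO client library to download the supplementary expression matrices rather than scraping the viewer page.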
Article and author information
Author details
Funding
Ministry of Science and Technology of the People's Republic of China (2019YFA0110100; 2017YFA0103303)
- Xiaoqun Wang
Ministry of Science and Technology of the People's Republic of China (2017YFA0102601)
- Qian Wu
Chinese Academy of Sciences (XDB32010100)
- Xiaoqun Wang
National Natural Science Foundation of China (31925019)
- Peng Cao
National Natural Science Foundation of China (31771140; 81891001)
- Xiaoqun Wang
BUAA-CCMU Big Data and Precision Medicine Advanced Innovation Center Project (BHME-2019001)
- Xiaoqun Wang
The funders had no role in study design, data collection and interpretation, or the decision to submit the work for publication.
Ethics
Animal experimentation: All experimental procedures were conducted following protocols approved by the Administrative Panel on Laboratory Animal Care at the National Institute of Biological Sciences, Beijing (NIBS) (NIBS2021M0006) and Institute of Biophysics, Chinese Academy of Sciences (SYXK2019015).
Reviewing Editor
- Sacha B Nelson, Brandeis University, United States
Version history
- Received: April 27, 2021
- Preprint posted: April 28, 2021
- Accepted: July 25, 2021
- Accepted Manuscript published: July 28, 2021 (version 1)
- Version of Record published: August 5, 2021 (version 2)
Copyright
© 2021, Xie et al.
This article is distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use and redistribution provided that the original author and source are credited.
Metrics
- Page views: 3,538
- Downloads: 669
- Citations: 18
Article citation count generated by polling the highest count across the following sources: Crossref, PubMed Central, Scopus.