Animals vocalize only in certain behavioral contexts, but the circuits and synapses through which forebrain neurons trigger or suppress vocalization remain unknown. Here we used transsynaptic tracing to identify two populations of inhibitory neurons that lie upstream of neurons in the periaqueductal gray that gate the production of ultrasonic vocalizations in mice (i.e., PAG-USV neurons). Activating PAG-projecting neurons in the preoptic hypothalamus (POA-PAG neurons) elicited USV production in the absence of social cues. In contrast, activating PAG-projecting neurons in the central-medial boundary zone of the amygdala (AmgC/M-PAG neurons) transiently suppressed USV production without disrupting non-vocal social behavior. Optogenetics-assisted circuit mapping in brain slices revealed that POA-PAG neurons directly inhibit PAG interneurons, which in turn inhibit PAG-USV neurons, whereas AmgC/M-PAG neurons directly inhibit PAG-USV neurons. These experiments identify two major forebrain inputs to the PAG that trigger and suppress vocalization, respectively, while also establishing the synaptic mechanisms through which these neurons exert opposing behavioral effects.
Data have been deposited to the Duke Research Data Repository, under the DOI: 10.7924/r4cz38d99. We have deposited four types of data in the repository: (1) confocal microscope images of in situ hybridization, (2) audio and video files from the mice used in this study, (3) slice electrophysiology data, and (4) custom MATLAB code used for data analysis. All other data analyzed in this study are included in the manuscript and supporting files.
Data and scripts from: Circuit and synaptic organization of forebrain-to-midbrain pathways that promote and suppress vocalization. Duke Research Data Repository, doi:10.7924/r4cz38d99.
- Richard Mooney
- Fan Wang
- Valerie Michael
The funders had no role in study design, data collection and interpretation, or the decision to submit the work for publication.
Animal experimentation: All experiments were conducted according to protocols approved by the Duke University Institutional Animal Care and Use Committee (protocol #A227-17-09).
- Catherine Emily Carr, University of Maryland, United States
© 2020, Michael et al.
This article is distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use and redistribution provided that the original author and source are credited.
Ultrasonic vocalizations (USVs) play an important role in communication and navigation in many species. Because of their social and affective significance, rodent USVs are increasingly used as a behavioral measure in neurodevelopmental and neurolinguistic research. Reliably attributing USVs to their emitter during close interactions has remained a key challenge; solving it would lend substantial confidence to all subsequent analyses. We present a hybrid ultrasonic tracking system, Hybrid Vocalization Localizer (HyVL), that synergistically integrates a high-resolution acoustic camera with high-quality ultrasonic microphones. HyVL is the first system to achieve millimeter-scale precision in localizing USVs (~3.4–4.8 mm, with 91% of USVs assigned), roughly three times better than other systems and approaching the physical limit set by the size of the mouse snout (~10 mm). We analyze mouse courtship interactions and demonstrate that males and females vocalize in starkly different relative spatial positions, and that the fraction of female vocalizations has likely been overestimated previously due to imprecise localization. Further, we find that when two male mice interact with one female, one of the males takes a dominant role in the interaction, both in terms of vocalization rate and location relative to the female. HyVL substantially improves the precision with which social communication between rodents can be studied. It is also affordable, open-source, easy to set up, can be integrated with existing setups, and reduces the required number of experiments and animals.
How does the human brain combine information across the eyes? It has been known for many years that cortical normalization mechanisms implement ‘ocularity invariance’: equalizing neural responses to spatial patterns presented either monocularly or binocularly. Here, we used a novel combination of electrophysiology, psychophysics, pupillometry, and computational modeling to ask whether this invariance also holds for flickering luminance stimuli with no spatial contrast. We find dramatic violations of ocularity invariance for these stimuli, both in the cortex and in the subcortical pathways that govern pupil diameter. Specifically, we find substantial binocular facilitation in both pathways, with the effect being strongest in the cortex. Near-linear binocular additivity (instead of ocularity invariance) was also found using a perceptual luminance matching task. Ocularity invariance is, therefore, not a ubiquitous feature of visual processing, and the brain appears to repurpose a generic normalization algorithm for different visual functions by adjusting the amount of interocular suppression.