Neural variability determines coding strategies for natural self-motion in macaque monkeys
We have previously reported that central neurons mediating vestibulo-spinal reflexes and self-motion perception optimally encode natural self-motion (Mitchell et al., 2018). Importantly, however, the vestibular nuclei also comprise other neuronal classes that mediate essential functions such as the vestibulo-ocular reflex (VOR) and its adaptation. Here we show that heterogeneities in resting discharge variability mediate a trade-off between faithful encoding and optimal coding via temporal whitening. Specifically, neurons with lower variability did not whiten naturalistic self-motion but instead faithfully represented the stimulus's detailed time course, whereas neurons with higher variability showed temporal whitening. Using a well-established model of VOR pathways, we demonstrate that faithful stimulus encoding is necessary to generate the compensatory eye movements found experimentally during naturalistic self-motion. Our findings suggest a novel functional role for variability in establishing different coding strategies: 1) faithful stimulus encoding for generating the VOR; 2) optimized coding via temporal whitening for other vestibular functions.
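The contrast between the two coding strategies can be illustrated with a toy signal-processing sketch (this is illustrative only, not the authors' analysis pipeline). Natural self-motion has power concentrated at low frequencies; a "faithful" response inherits that red power spectrum, while a "whitening" response applies a gain that rises with frequency so that response power becomes roughly flat:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2 ** 14
freqs = np.fft.rfftfreq(n, d=1.0)

# Toy "naturalistic" stimulus: amplitude ~ 1/f, so power ~ 1/f^2
# (power concentrated at low frequencies, as in natural self-motion).
amp = np.zeros_like(freqs)
amp[1:] = 1.0 / freqs[1:]
phases = rng.uniform(0.0, 2.0 * np.pi, freqs.size)
stim = np.fft.irfft(amp * np.exp(1j * phases), n)

# Faithful encoding: the response tracks the stimulus time course,
# so it inherits the stimulus's red power spectrum.
faithful = stim.copy()

# Temporal whitening: gain grows with frequency (inverse of the
# stimulus amplitude fall-off), flattening the response spectrum.
gain = freqs.copy()
whitened = np.fft.irfft(np.fft.rfft(stim) * gain, n)

def band_power(x, lo, hi):
    """Mean spectral power of x in the frequency band [lo, hi)."""
    p = np.abs(np.fft.rfft(x)) ** 2
    mask = (freqs >= lo) & (freqs < hi)
    return p[mask].mean()

# Low-band vs high-band power: large ratio = red spectrum, ~1 = white.
ratio_faithful = band_power(faithful, 0.01, 0.05) / band_power(faithful, 0.2, 0.4)
ratio_whitened = band_power(whitened, 0.01, 0.05) / band_power(whitened, 0.2, 0.4)
print(ratio_faithful, ratio_whitened)
```

Here the faithful response keeps far more power at low frequencies than at high ones, whereas the whitened response distributes power roughly equally across bands, which is the spectral signature of temporal whitening.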
Data are available on figshare: 10.6084/m9.figshare.12594803.
Article and author information
Canadian Institutes of Health Research (162285)
- Jérome Carriot
- Kathleen E Cullen
- Maurice J Chacron
The funders had no role in study design, data collection and interpretation, or the decision to submit the work for publication.
Animal experimentation: All experimental protocols were approved by the McGill University Animal Care Committee (#4096) and complied with the guidelines of the Canadian Council on Animal Care.
- Fred Rieke, University of Washington, United States
- Received: April 2, 2020
- Accepted: September 10, 2020
- Accepted Manuscript published: September 11, 2020 (version 1)
- Version of Record published: September 28, 2020 (version 2)
© 2020, Mackrous et al.
This article is distributed under the terms of the Creative Commons Attribution License permitting unrestricted use and redistribution provided that the original author and source are credited.