Coding of latent variables in sensory, parietal, and frontal cortices during closed-loop virtual navigation

Abstract

We do not understand how neural nodes operate and coordinate within the recurrent action-perception loops that characterize naturalistic self-environment interactions. Here, we record single-unit spiking activity and local field potentials (LFPs) simultaneously from the dorsomedial superior temporal area (MSTd), parietal area 7a, and dorsolateral prefrontal cortex (dlPFC) as monkeys navigate in virtual reality to 'catch fireflies'. This task requires animals to actively sample from a closed-loop virtual environment while concurrently computing continuous latent variables: (i) the distance and angle travelled (i.e., path integration) and (ii) the distance and angle to a memorized firefly location (i.e., a hidden spatial goal). We observed a patterned mixed selectivity, with the prefrontal cortex most prominently coding for latent variables, parietal cortex coding for sensorimotor variables, and MSTd most often coding for eye movements. However, even the area traditionally considered sensory (i.e., MSTd) tracked latent variables, demonstrating path integration and vector-coding of hidden spatial goals. Further, global encoding profiles and unit-to-unit coupling (i.e., noise correlations) indicated a functional subnetwork composed of MSTd and dlPFC, rather than of either of these areas and 7a, as anatomy would suggest. We show that the greater the unit-to-unit coupling between MSTd and dlPFC, the more the animals' gaze position was indicative of the ongoing location of the hidden spatial goal. We suggest this MSTd-dlPFC subnetwork reflects the monkeys' natural and adaptive task strategy wherein they continuously gaze toward the location of the (invisible) target. Together, these results highlight the distributed nature of neural coding during closed action-perception loops and suggest that fine-grained functional subnetworks may be dynamically established to subserve (embodied) task strategies.
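The two families of latent variables and the unit-to-unit coupling measure referenced above are standard quantities that a short sketch can make concrete. The Python example below is a minimal illustration, not the authors' analysis code: it assumes linear and angular velocity traces sampled at a fixed frame rate and a firefly position known in start-of-trial coordinates, and every name in it (latent_task_variables, lin_vel, ang_vel, firefly_xy, noise_correlation) is hypothetical.

```python
import numpy as np

def latent_task_variables(lin_vel, ang_vel, firefly_xy, dt=1.0 / 60):
    """Illustrative computation of the two families of latent variables.

    lin_vel    : linear speed per frame (cm/s)
    ang_vel    : angular velocity per frame (deg/s)
    firefly_xy : (x, y) of the memorized firefly, in start-of-trial coordinates
    dt         : frame duration in seconds (a 60 Hz update is assumed here)
    """
    # (i) path integration: distance and angle travelled since trial onset
    dist_travelled = np.cumsum(np.abs(lin_vel)) * dt
    ang_travelled = np.cumsum(ang_vel) * dt

    # integrate velocities into a position estimate (dead reckoning)
    heading = np.deg2rad(ang_travelled)
    x = np.cumsum(lin_vel * np.cos(heading)) * dt
    y = np.cumsum(lin_vel * np.sin(heading)) * dt

    # (ii) vector to the hidden spatial goal, in the current egocentric frame
    rel_x, rel_y = firefly_xy[0] - x, firefly_xy[1] - y
    dist_to_goal = np.hypot(rel_x, rel_y)
    ang_to_goal = np.rad2deg(np.arctan2(rel_y, rel_x)) - ang_travelled
    return dist_travelled, ang_travelled, dist_to_goal, ang_to_goal


def noise_correlation(counts_a, counts_b):
    """Pearson correlation of trial-by-trial spike counts for a unit pair,
    computed within one condition so that only shared trial-to-trial
    fluctuations (not shared tuning) drive the estimate."""
    return float(np.corrcoef(counts_a, counts_b)[0, 1])
```

In an encoding analysis, traces like these would serve as regressors against simultaneously recorded spiking, and the noise-correlation estimate would quantify unit-to-unit coupling within and across areas.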

Data availability

Data and code are available at: https://osf.io/d7wtz/.

Article and author information

Author details

  1. Jean-Paul Noel

    Center for Neural Science, New York University, New York City, United States
    Competing interests
    The authors declare that no competing interests exist.
    ORCID icon "This ORCID iD identifies the author of this article:" 0000-0001-5297-3363
  2. Edoardo Balzani

    Center for Neural Science, New York University, New York City, United States
    Competing interests
    The authors declare that no competing interests exist.
    ORCID icon "This ORCID iD identifies the author of this article:" 0000-0002-3702-5856
  3. Eric Avila

    Center for Neural Science, New York University, New York City, United States
    Competing interests
    The authors declare that no competing interests exist.
  4. Kaushik Janakiraman Lakshminarasimhan

    Center for Theoretical Neuroscience, Columbia University, New York, United States
    Competing interests
    The authors declare that no competing interests exist.
  5. Stefania Bruni

    Center for Neural Science, New York University, New York City, United States
    Competing interests
    The authors declare that no competing interests exist.
  6. Panos Alefantis

    Center for Neural Science, New York University, New York City, United States
    Competing interests
    The authors declare that no competing interests exist.
  7. Cristina Savin

    Center for Neural Science, New York University, New York City, United States
    Competing interests
    The authors declare that no competing interests exist.
  8. Dora E Angelaki

    Center for Neural Science, New York University, New York, United States
    For correspondence
    da93@nyu.edu
    Competing interests
    The authors declare that no competing interests exist.
    ORCID icon "This ORCID iD identifies the author of this article:" 0000-0002-9650-8962

Funding

National Institutes of Health (1U19 NS118246)

  • Dora E Angelaki

National Institutes of Health (1R01 NS120407)

  • Dora E Angelaki

National Institutes of Health (1R01 DC004260)

  • Dora E Angelaki

National Institutes of Health (1R01 MH125571)

  • Cristina Savin

National Science Foundation (1922658)

  • Cristina Savin

Google faculty award

  • Cristina Savin

The funders had no role in study design, data collection and interpretation, or the decision to submit the work for publication.

Ethics

Animal experimentation: All surgeries and procedures were approved by the Institutional Animal Care and Use Committee at Baylor College of Medicine and New York University and were in accordance with National Institutes of Health guidelines.

Copyright

© 2022, Noel et al.

This article is distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use and redistribution provided that the original author and source are credited.

Metrics

  • 2,010 views
  • 326 downloads
  • 18 citations

Views, downloads and citations are aggregated across all versions of this paper published by eLife.

Cite this article

  1. Jean-Paul Noel
  2. Edoardo Balzani
  3. Eric Avila
  4. Kaushik Janakiraman Lakshminarasimhan
  5. Stefania Bruni
  6. Panos Alefantis
  7. Cristina Savin
  8. Dora E Angelaki
(2022)
Coding of latent variables in sensory, parietal, and frontal cortices during closed-loop virtual navigation
eLife 11:e80280.
https://doi.org/10.7554/eLife.80280

Further reading

    1. Neuroscience
    Magdalena Solyga, Georg B Keller
    Research Article

    Our movements result in predictable sensory feedback that is often multimodal. Based on deviations between predictions and actual sensory input, primary sensory areas of cortex have been shown to compute sensorimotor prediction errors. How prediction errors in one sensory modality influence the computation of prediction errors in another modality is still unclear. To investigate multimodal prediction errors in mouse auditory cortex, we used a virtual environment to experimentally couple running to both self-generated auditory and visual feedback. Using two-photon microscopy, we first characterized responses of layer 2/3 (L2/3) neurons to sounds, visual stimuli, and running onsets and found responses to all three stimuli. Probing responses evoked by audiomotor (AM) mismatches, we found that they closely resemble visuomotor (VM) mismatch responses in visual cortex (V1). Finally, testing for cross-modal influence on AM mismatch responses by coupling both sound amplitude and visual flow speed to the speed of running, we found that AM mismatch responses were amplified when paired with concurrent VM mismatches. Our results demonstrate that multimodal and non-hierarchical interactions shape prediction error responses in cortical L2/3.

    1. Neuroscience
    Gyeong Hee Pyeon, Hyewon Cho ... Yong Sang Jo
    Research Article Updated

    Recent studies suggest that calcitonin gene-related peptide (CGRP) neurons in the parabrachial nucleus (PBN) represent aversive information and signal a general alarm to the forebrain. If CGRP neurons serve as a true general alarm, their activation would modulate both passive and active defensive behaviors depending on the magnitude and context of the threat. However, most prior research has focused on the role of CGRP neurons in passive freezing responses, with limited exploration of their involvement in active defensive behaviors. To address this, we examined the role of CGRP neurons in active defensive behavior using a predator-like robot programmed to chase mice. Our electrophysiological results revealed that CGRP neurons encode the intensity of aversive stimuli through variations in firing durations and amplitudes. Optogenetic activation of CGRP neurons during robot chasing elevated flight responses in both conditioning and retention tests, presumably by amplifying the perception of the threat as more imminent and dangerous. In contrast, animals with inactivated CGRP neurons exhibited reduced flight responses, even when the robot was programmed to appear highly threatening during conditioning. These findings expand the understanding of CGRP neurons in the PBN as a critical alarm system, capable of dynamically regulating active defensive behaviors by amplifying threat perception and ensuring adaptive responses to varying levels of danger.