A functional topography within the cholinergic basal forebrain for encoding sensory cues and behavioral reinforcement outcomes

  1. Blaise Robert
  2. Eyal Y Kimchi
  3. Yurika Watanabe
  4. Tatenda Chakoma
  5. Miao Jing
  6. Yulong Li
  7. Daniel B Polley (corresponding author)
  1. Massachusetts Eye and Ear Infirmary, United States
  2. Massachusetts General Hospital, United States
  3. Chinese Institute for Brain Research, China
  4. Peking University School of Life Sciences, China

Abstract

Basal forebrain cholinergic neurons (BFCNs) project throughout the cortex to regulate arousal, stimulus salience, plasticity, and learning. Although often treated as a monolithic structure, the basal forebrain features distinct connectivity along its rostrocaudal axis that could impart regional differences in BFCN processing. Here, we performed simultaneous bulk calcium imaging from rostral and caudal BFCNs over a one-month period of variable reinforcement learning in mice. BFCNs in both regions showed equivalently weak responses to unconditioned visual stimuli and anticipated rewards. Rostral BFCNs in the horizontal limb of the diagonal band were more responsive to reward omission, more accurately classified behavioral outcomes, and more closely tracked fluctuations in pupil-indexed global brain state. Caudal tail BFCNs in the globus pallidus and substantia innominata were more responsive to unconditioned auditory stimuli, orofacial movements, and aversive reinforcement, and showed robust associative plasticity for punishment-predicting cues. These results identify a functional topography that diversifies the cholinergic modulatory signals broadcast to downstream brain regions.
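
As an illustrative aside on the outcome-classification result: decoding analyses of this kind are typically run as cross-validated classifiers on trial-wise fluorescence features. The sketch below is a minimal, hypothetical Python example, not the authors' pipeline; the synthetic data, the post-outcome window, and the logistic-regression decoder are all assumptions chosen for clarity.

```python
# Minimal sketch: decode trial outcome (reward vs. omission) from bulk
# GCaMP dF/F traces. Synthetic data stands in for real recordings; the
# feature (mean post-outcome dF/F) and classifier are illustrative choices.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

n_trials, n_frames = 200, 100              # trials x imaging frames per trial
outcome = rng.integers(0, 2, n_trials)     # 0 = reward delivered, 1 = omission

# Synthetic dF/F: omission trials receive a small added transient in a
# post-outcome window, mimicking a region whose BFCNs signal reward omission.
dff = rng.normal(0.0, 1.0, size=(n_trials, n_frames))
dff[outcome == 1, 40:60] += 0.5

# Feature: mean dF/F in the post-outcome window (frames 40-59).
X = dff[:, 40:60].mean(axis=1, keepdims=True)

# Five-fold cross-validated logistic regression; accuracy near 0.5 would
# indicate the signal carries no outcome information.
scores = cross_val_score(LogisticRegression(), X, outcome, cv=5)
print(f"decoding accuracy: {scores.mean():.2f} +/- {scores.std():.2f}")
```

Comparing cross-validated accuracies of this sort between rostral and caudal signals is one way a regional difference in outcome coding can be quantified.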

Data availability

Figure 1 - Source Data 1 contains the data for Figure 1D. All data generated or analyzed during this study are available on Mendeley Data (doi:10.17632/d8tjdxyjcm.2).
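
For readers who want to locate the dataset programmatically, the short sketch below (a convenience example, not part of the published methods) resolves the Mendeley Data DOI through doi.org to find the dataset's landing page.

```python
# Resolve the Mendeley Data DOI to its landing page URL (illustrative only;
# individual files are downloaded from that page).
import urllib.request

doi_url = "https://doi.org/10.17632/d8tjdxyjcm.2"
with urllib.request.urlopen(doi_url) as response:
    print("Dataset landing page:", response.geturl())  # final URL after redirects
```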

Article and author information

Author details

  1. Blaise Robert

    Eaton-Peabody Laboratories, Massachusetts Eye and Ear Infirmary, Boston, United States
    Competing interests
    The authors declare that no competing interests exist.
    ORCID iD: 0000-0001-7945-8775
  2. Eyal Y Kimchi

    Department of Neurology, Massachusetts General Hospital, Boston, United States
    Competing interests
    The authors declare that no competing interests exist.
  3. Yurika Watanabe

    Eaton-Peabody Laboratories, Massachusetts Eye and Ear Infirmary, Boston, United States
    Competing interests
    The authors declare that no competing interests exist.
  4. Tatenda Chakoma

    Eaton-Peabody Laboratories, Massachusetts Eye and Ear Infirmary, Boston, United States
    Competing interests
    The authors declare that no competing interests exist.
  5. Miao Jing

    Chinese Institute for Brain Research, Beijing, China
    Competing interests
    The authors declare that no competing interests exist.
  6. Yulong Li

    State Key Laboratory of Membrane Biology, Peking University School of Life Sciences, Beijing, China
    Competing interests
    The authors declare that no competing interests exist.
  7. Daniel B Polley

    Eaton-Peabody Laboratories, Massachusetts Eye and Ear Infirmary, Boston, United States
    For correspondence
    Daniel_Polley@meei.harvard.edu
    Competing interests
    The authors declare that no competing interests exist.
    ORCID iD: 0000-0002-5120-2409

Funding

National Institute on Deafness and Other Communication Disorders (DC017078)

  • Daniel B Polley

The Nancy Lurie Marks Family Foundation

  • Daniel B Polley

Herchel Smith Harvard Scholarship

  • Blaise Robert

Fondation Zdenek et Michaela Bakala Scholarship

  • Blaise Robert

National Institute of Mental Health (K08MH116135)

  • Eyal Y Kimchi

The funders had no role in study design, data collection and interpretation, or the decision to submit the work for publication.

Ethics

Animal experimentation: All procedures were approved by the Massachusetts Eye and Ear Animal Care and Use Committee (protocol #10-03-006A) and followed the guidelines established by the National Institutes of Health for the care and use of laboratory animals.

Copyright

© 2021, Robert et al.

This article is distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use and redistribution provided that the original author and source are credited.

Metrics

  • 2,725 views
  • 419 downloads
  • 42 citations

Views, downloads and citations are aggregated across all versions of this paper published by eLife.

Cite this article

  1. Blaise Robert
  2. Eyal Y Kimchi
  3. Yurika Watanabe
  4. Tatenda Chakoma
  5. Miao Jing
  6. Yulong Li
  7. Daniel B Polley
(2021)
A functional topography within the cholinergic basal forebrain for encoding sensory cues and behavioral reinforcement outcomes
eLife 10:e69514.
https://doi.org/10.7554/eLife.69514

