Neuroscience

Distinct signals in medial and lateral VTA dopamine neurons modulate fear extinction at different times

Research Article
Cite this article as: eLife 2020;9:e54936 doi: 10.7554/eLife.54936

Abstract

Dopamine (DA) neurons are known to encode reward prediction error (RPE), in addition to other signals, such as salience. While RPE is known to support learning, the role of salience in supporting learning remains less clear. To address this, we recorded and manipulated VTA DA neurons in mice during fear extinction, a behavior we observed to generate spatially segregated RPE and salience signals. We applied deep learning to classify mouse freezing behavior, eliminating the need for human scoring. Our fiber photometry recordings showed that DA neurons in medial and lateral VTA have distinct activity profiles during fear extinction: medial VTA activity more closely reflected RPE, while lateral VTA activity more closely reflected a salience-like signal. Optogenetic inhibition of DA neurons in either region slowed fear extinction, with the relevant time period for inhibition differing across regions. Our results indicate that salience-like signals can have similar downstream consequences to RPE-like signals, although with different temporal dependencies.

Data availability

All data generated or analysed during this study are included in the manuscript as supporting files. Code for all steps is available on GitHub: https://github.com/neurocaience/deepfreeze/ (Cai et al., 2020).
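The paper's classifier is a deep network trained on labeled video (see the repository above), but the underlying idea — labeling a frame as "freezing" when frame-to-frame motion stays low for a sustained bout — can be illustrated with a simple motion-energy baseline. The function name, threshold, and bout-length parameters below are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def detect_freezing(frames, motion_threshold=5.0, min_bout_frames=15):
    """Label each video frame as freezing (True) or moving (False).

    A frame is a candidate freeze if the mean absolute pixel change from
    the previous frame falls below `motion_threshold`; only runs of at
    least `min_bout_frames` consecutive low-motion frames count as bouts.
    (Illustrative baseline only; the paper uses a trained deep network.)
    """
    frames = np.asarray(frames, dtype=float)
    # Motion energy: mean absolute frame-to-frame pixel difference.
    motion = np.abs(np.diff(frames, axis=0)).mean(axis=(1, 2))
    # Prepend False: the first frame has no predecessor to compare against.
    low_motion = np.concatenate([[False], motion < motion_threshold])

    # Keep only sufficiently long runs of low motion as freezing bouts.
    freezing = np.zeros_like(low_motion)
    run_start = None
    for i, flag in enumerate(np.append(low_motion, False)):
        if flag and run_start is None:
            run_start = i
        elif not flag and run_start is not None:
            if i - run_start >= min_bout_frames:
                freezing[run_start:i] = True
            run_start = None
    return freezing
```

Thresholding motion energy is brittle across lighting and camera setups, which is one motivation for the learned classifier the authors describe: it removes both the hand-tuned threshold and the need for human scoring.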

Article and author information

Author details

  1. Lili X Cai

    Princeton Neuroscience Institute, Princeton University, Princeton, United States
    Competing interests
    The authors declare that no competing interests exist.
  2. Katherine Pizano

    Princeton Neuroscience Institute, Princeton University, Princeton, United States
    Competing interests
    The authors declare that no competing interests exist.
  3. Gregory W Gundersen

    Computer Science, Princeton University, Princeton, United States
    Competing interests
    The authors declare that no competing interests exist.
  4. Cameron L Hayes

    Princeton Neuroscience Institute, Princeton University, Princeton, United States
    Competing interests
    The authors declare that no competing interests exist.
    ORCID iD: 0000-0002-0388-5807
  5. Weston T Fleming

    Princeton Neuroscience Institute, Princeton University, Princeton, United States
    Competing interests
    The authors declare that no competing interests exist.
  6. Sebastian Holt

    Princeton Neuroscience Institute, Princeton University, Princeton, United States
    Competing interests
    The authors declare that no competing interests exist.
  7. Julia M Cox

    Princeton Neuroscience Institute, Princeton University, Princeton, United States
    Competing interests
    The authors declare that no competing interests exist.
  8. Ilana B Witten

    Princeton Neuroscience Institute, Princeton University, Princeton, United States
    For correspondence
    iwitten@princeton.edu
    Competing interests
    The authors declare that no competing interests exist.
    ORCID iD: 0000-0003-0548-2160

Funding

NIH (T32MH065214)

  • Lili X Cai

NYSCF

  • Ilana B Witten

ARO (W911NF1710554)

  • Ilana B Witten

NIH (1R01MH106689-01A1)

  • Ilana B Witten

The funders had no role in study design, data collection and interpretation, or the decision to submit the work for publication.

Ethics

Animal experimentation: All experiments followed guidelines established by the National Institutes of Health and were reviewed by the Princeton University Institutional Animal Care and Use Committee (IACUC protocol 1876-18).

Reviewing Editor

  1. Naoshige Uchida, Harvard University, United States

Publication history

  1. Received: January 7, 2020
  2. Accepted: June 5, 2020
  3. Accepted Manuscript published: June 10, 2020 (version 1)
  4. Accepted Manuscript updated: June 11, 2020 (version 2)
  5. Version of Record published: July 15, 2020 (version 3)
  6. Version of Record updated: July 27, 2020 (version 4)

Copyright

© 2020, Cai et al.

This article is distributed under the terms of the Creative Commons Attribution License, permitting unrestricted use and redistribution provided that the original author and source are credited.

Metrics

  • 3,309
    Page views
  • 505
    Downloads
  • 9
    Citations

Article citation count generated by polling the highest count across the following sources: Crossref, PubMed Central, Scopus.
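That polling rule amounts to taking the maximum across sources rather than summing them, since each source counts an overlapping set of citing articles. The per-source numbers below are an invented example shaped like this article's metrics, not data from the sources themselves:

```python
def citation_count(source_counts):
    """Return the displayed citation count: the highest value reported
    by any polled source, rather than the sum across sources."""
    return max(source_counts.values(), default=0)

# Hypothetical per-source counts in the style of the polled services.
print(citation_count({"Crossref": 8, "PubMed Central": 7, "Scopus": 9}))  # prints 9
```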


Further reading

    1. Neuroscience
    Casey Paquola et al.
    Tools and Resources (Updated)

    Neuroimaging stands to benefit from emerging ultrahigh-resolution 3D histological atlases of the human brain; the first of which is ‘BigBrain’. Here, we review recent methodological advances for the integration of BigBrain with multi-modal neuroimaging and introduce a toolbox, ’BigBrainWarp’, that combines these developments. The aim of BigBrainWarp is to simplify workflows and support the adoption of best practices. This is accomplished with a simple wrapper function that allows users to easily map data between BigBrain and standard MRI spaces. The function automatically pulls specialised transformation procedures, based on ongoing research from a wide collaborative network of researchers. Additionally, the toolbox improves accessibility of histological information through dissemination of ready-to-use cytoarchitectural features. Finally, we demonstrate the utility of BigBrainWarp with three tutorials and discuss the potential of the toolbox to support multi-scale investigations of brain organisation.

    2. Neuroscience
    Gabriella R Sterne et al.
    Tools and Resources (Updated)

    Neural circuits carry out complex computations that allow animals to evaluate food, select mates, move toward attractive stimuli, and move away from threats. In insects, the subesophageal zone (SEZ) is a brain region that receives gustatory, pheromonal, and mechanosensory inputs and contributes to the control of diverse behaviors, including feeding, grooming, and locomotion. Despite its importance in sensorimotor transformations, the study of SEZ circuits has been hindered by limited knowledge of the underlying diversity of SEZ neurons. Here, we generate a collection of split-GAL4 lines that provides precise genetic targeting of 138 different SEZ cell types in adult Drosophila melanogaster, comprising approximately one third of all SEZ neurons. We characterize the single-cell anatomy of these neurons and find that they cluster by morphology into six supergroups that organize the SEZ into discrete anatomical domains. We find that the majority of local SEZ interneurons are not classically polarized, suggesting rich local processing, whereas SEZ projection neurons tend to be classically polarized, conveying information to a limited number of higher brain regions. This study provides insight into the anatomical organization of the SEZ and generates resources that will facilitate further study of SEZ neurons and their contributions to sensory processing and behavior.