High-throughput synapse-resolving two-photon fluorescence microendoscopy for deep-brain volumetric imaging in vivo

  1. Guanghan Meng
  2. Yajie Liang
  3. Sarah Sarsfield
  4. Wan-chen Jiang
  5. Rongwen Lu
  6. Joshua Tate Dudman
  7. Yeka Aponte
  8. Na Ji (corresponding author)
  1. University of California, Berkeley, United States
  2. Janelia Research Campus, Howard Hughes Medical Institute, United States
  3. National Institute on Drug Abuse, United States
  4. Johns Hopkins University School of Medicine, United States

Abstract

Optical imaging has become a powerful tool for studying the brain in vivo. Because adult brain tissue is opaque, microendoscopy, in which an optical probe such as a gradient index (GRIN) lens is embedded in the tissue to provide an optical relay, is the method of choice for imaging neurons and neural activity in deeply buried brain structures. By incorporating a Bessel focus scanning module into two-photon fluorescence microendoscopy, we extended the excitation focus axially and improved its lateral resolution. Scanning the Bessel focus in two dimensions, we imaged volumes of neurons at high throughput while resolving fine structures such as synaptic terminals. We applied this approach to volumetric anatomical imaging of dendritic spines and axonal boutons in the mouse hippocampus and to functional imaging of GABAergic neurons in the mouse lateral hypothalamus in vivo.
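
To make the throughput argument concrete, here is a minimal toy sketch in Python (an illustration added here, not the authors' code; the volume dimensions and number of planes are arbitrary assumptions): with an axially extended excitation focus, one 2D scan integrates signal over depth and approximates the axial projection of a volume that a conventional Gaussian focus would have to cover plane by plane.

    import numpy as np

    rng = np.random.default_rng(0)
    nz, ny, nx = 40, 128, 128          # assumed volume: 40 planes of 128 x 128 pixels
    volume = np.zeros((nz, ny, nx))

    # Scatter point-like fluorescent structures (e.g., boutons) through the volume.
    for _ in range(200):
        z, y, x = rng.integers(nz), rng.integers(ny), rng.integers(nx)
        volume[z, y, x] = rng.uniform(0.5, 1.0)

    # Conventional Gaussian focus: one 2D frame per plane -> nz frames per volume.
    frames_gaussian = nz

    # Axially extended (Bessel-like) focus: all nz planes are excited at once,
    # so a single 2D scan yields an approximate axial projection of the volume.
    bessel_frame = volume.sum(axis=0)
    frames_bessel = 1

    print("frames per volume, Gaussian focus:", frames_gaussian)
    print("frames per volume, extended focus:", frames_bessel)
    print("idealized volume-rate gain:       ", frames_gaussian // frames_bessel)
    print("projection image shape:           ", bessel_frame.shape)

The speedup comes at the cost of collapsing axial information within the extended focus into a projection, which is why such extended-focus scanning is typically applied to sparsely labeled samples such as the spines and boutons imaged here.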

Data availability

Almost all data needed to evaluate the conclusions in the paper are present in the paper or the supplementary materials. Raw image data for Figures 2, 4, and 9 are available from Dryad (https://doi.org/10.5061/dryad.pr4t978).

The following data sets were generated:

  • Raw image data for Figures 2, 4, and 9 (Meng et al., 2019). Dryad Digital Repository, https://doi.org/10.5061/dryad.pr4t978

Article and author information

Author details

  1. Guanghan Meng

    Department of Physics, University of California, Berkeley, Berkeley, United States
    Competing interests
    No competing interests declared.
  2. Yajie Liang

    Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, United States
    Competing interests
    No competing interests declared.
  3. Sarah Sarsfield

    Neuronal Circuits and Behavior Unit, National Institute on Drug Abuse, Baltimore, United States
    Competing interests
    No competing interests declared.
  4. Wan-chen Jiang

    Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, United States
    Competing interests
    No competing interests declared.
  5. Rongwen Lu

    Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, United States
    Competing interests
    The Bessel focus scanning intellectual property has been licensed to Thorlabs, Inc. by HHMI.
  6. Joshua Tate Dudman

    Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, United States
    Competing interests
    No competing interests declared.
    ORCID iD: 0000-0002-4436-1057
  7. Yeka Aponte

    Solomon H Snyder Department of Neuroscience, Johns Hopkins University School of Medicine, Baltimore, United States
    Competing interests
    No competing interests declared.
    ORCID iD: 0000-0002-5967-2579
  8. Na Ji

    Department of Physics, University of California, Berkeley, Berkeley, United States
    For correspondence
    jina@berkeley.edu
    Competing interests
    The Bessel focus scanning intellectual property has been licensed to Thorlabs, Inc. by HHMI.
    ORCID iD: 0000-0002-5527-1663

Funding

Howard Hughes Medical Institute

  • Guanghan Meng
  • Yajie Liang
  • Wan-chen Jiang
  • Rongwen Lu
  • Joshua Tate Dudman
  • Na Ji

National Institute of Neurological Disorders and Stroke

  • Guanghan Meng
  • Na Ji

National Institute on Drug Abuse

  • Sarah Sarsfield
  • Yeka Aponte

The funders had no role in study design, data collection and interpretation, or the decision to submit the work for publication.

Reviewing Editor

  1. David Kleinfeld, University of California, San Diego, United States

Ethics

Animal experimentation: All animal experiments were conducted according to the United States National Institutes of Health guidelines for animal research. Procedures and protocols were approved by the Institutional Animal Care and Use Committee at Janelia Research Campus, Howard Hughes Medical Institute (protocol number: 16-147).

Version history

  1. Received: August 5, 2018
  2. Accepted: December 20, 2018
  3. Accepted Manuscript published: January 3, 2019 (version 1)
  4. Accepted Manuscript updated: January 4, 2019 (version 2)
  5. Version of Record published: January 18, 2019 (version 3)

Copyright

© 2019, Meng et al.

This article is distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use and redistribution provided that the original author and source are credited.

Metrics

  • 11,359 views
  • 1,598 downloads
  • 76 citations

Views, downloads and citations are aggregated across all versions of this paper published by eLife.

Cite this article

Guanghan Meng, Yajie Liang, Sarah Sarsfield, Wan-chen Jiang, Rongwen Lu, Joshua Tate Dudman, Yeka Aponte, Na Ji (2019) High-throughput synapse-resolving two-photon fluorescence microendoscopy for deep-brain volumetric imaging in vivo. eLife 8:e40805. https://doi.org/10.7554/eLife.40805

Further reading

    1. Neuroscience
    Jack W Lindsey, Elias B Issa
    Research Article

    Object classification has been proposed as a principal objective of the primate ventral visual stream and has been used as an optimization target for deep neural network models (DNNs) of the visual system. However, visual brain areas represent many different types of information, and optimizing for classification of object identity alone does not constrain how other information may be encoded in visual representations. Information about different scene parameters may be discarded altogether (‘invariance’), represented in non-interfering subspaces of population activity (‘factorization’) or encoded in an entangled fashion. In this work, we provide evidence that factorization is a normative principle of biological visual representations. In the monkey ventral visual hierarchy, we found that factorization of object pose and background information from object identity increased in higher-level regions and strongly contributed to improving object identity decoding performance. We then conducted a large-scale analysis of factorization of individual scene parameters – lighting, background, camera viewpoint, and object pose – in a diverse library of DNN models of the visual system. Models which best matched neural, fMRI, and behavioral data from both monkeys and humans across 12 datasets tended to be those which factorized scene parameters most strongly. Notably, invariance to these parameters was not as consistently associated with matches to neural and behavioral data, suggesting that maintaining non-class information in factorized activity subspaces is often preferred to dropping it altogether. Thus, we propose that factorization of visual scene information is a widely used strategy in brains and DNN models thereof.
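
As a concrete illustration of the factorization idea (a hypothetical toy example added here, not the authors' analysis or data), the Python sketch below compares a population in which a pose-coding direction is orthogonal to the identity-coding axis ('factorized') with one in which the two directions overlap ('entangled'): only in the entangled case does varying pose move activity along the identity axis and thereby interfere with identity decoding.

    import numpy as np

    rng = np.random.default_rng(1)
    n_neurons = 50

    # Identity-coding axis in population activity space.
    identity_axis = rng.standard_normal(n_neurons)
    identity_axis /= np.linalg.norm(identity_axis)

    # Pose axis either orthogonal to identity (factorized) or overlapping (entangled).
    pose_factorized = rng.standard_normal(n_neurons)
    pose_factorized -= (pose_factorized @ identity_axis) * identity_axis
    pose_factorized /= np.linalg.norm(pose_factorized)
    pose_entangled = 0.8 * identity_axis + 0.6 * pose_factorized   # unit norm

    def identity_variance_from_pose(pose_axis, n_trials=1000):
        """Variance that pose changes alone induce along the identity axis."""
        pose_values = rng.standard_normal(n_trials)
        responses = np.outer(pose_values, pose_axis)       # trials x neurons
        return np.var(responses @ identity_axis)

    print("pose-driven variance on identity axis (factorized):",
          round(identity_variance_from_pose(pose_factorized), 3))
    print("pose-driven variance on identity axis (entangled): ",
          round(identity_variance_from_pose(pose_entangled), 3))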

    2. Neuroscience
    Zhaoran Zhang, Huijun Wang ... Kunlin Wei
    Research Article

    The sensorimotor system can recalibrate itself without our conscious awareness, a type of procedural learning whose computational mechanism remains undefined. Recent findings on implicit motor adaptation, such as over-learning from small perturbations and fast saturation for increasing perturbation size, challenge existing theories based on sensory errors. We argue that perceptual error, arising from the optimal combination of movement-related cues, is the primary driver of implicit adaptation. Central to our theory is the increasing sensory uncertainty of visual cues with increasing perturbations, which was validated through perceptual psychophysics (Experiment 1). Our theory predicts the learning dynamics of implicit adaptation across a spectrum of perturbation sizes on a trial-by-trial basis (Experiment 2). It explains proprioception changes and their relation to visual perturbation (Experiment 3). By modulating visual uncertainty in perturbation, we induced unique adaptation responses in line with our model predictions (Experiment 4). Overall, our perceptual error framework outperforms existing models based on sensory errors, suggesting that perceptual error in locating one’s effector, supported by Bayesian cue integration, underpins the sensorimotor system’s implicit adaptation.
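
The Bayesian cue-integration step at the core of this framework can be sketched with the standard inverse-variance weighting rule (a generic illustration added here, with made-up numbers, not the authors' fitted model): as the visual cue becomes less reliable, the combined estimate of hand position shifts less toward the perturbed cursor, so the perceptual error that drives adaptation grows sublinearly with perturbation size.

    import numpy as np

    def combine_cues(mu_a, sigma_a, mu_b, sigma_b):
        """Reliability-weighted (inverse-variance) fusion of two Gaussian cues."""
        w_a, w_b = 1 / sigma_a**2, 1 / sigma_b**2
        mu = (w_a * mu_a + w_b * mu_b) / (w_a + w_b)
        sigma = np.sqrt(1 / (w_a + w_b))
        return mu, sigma

    # Proprioception places the hand at 0 deg (sd 6 deg, assumed); the visually
    # perturbed cursor appears at 15 deg with uncertainty that grows with
    # perturbation size (values are illustrative only).
    for cursor_sigma in (2.0, 6.0, 12.0):
        est, unc = combine_cues(0.0, 6.0, 15.0, cursor_sigma)
        print(f"cursor sd {cursor_sigma:4.1f} deg -> perceived hand at "
              f"{est:5.2f} deg (sd {unc:.2f})")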