Fast two-photon imaging of subcellular voltage dynamics in neuronal tissue with genetically encoded indicators
Abstract
Monitoring voltage dynamics in defined neurons deep in the brain is critical for unraveling the function of neuronal circuits, but is challenging due to the limited performance of existing tools. In particular, while genetically encoded voltage indicators have shown promise for optical detection of voltage transients, many indicators exhibit low sensitivity when imaged under two-photon illumination. Previous studies thus fell short of visualizing voltage dynamics in individual neurons in single trials. Here, we report ASAP2s, a novel voltage indicator with improved sensitivity. By imaging ASAP2s using random-access multi-photon microscopy, we demonstrate robust single-trial detection of action potentials in organotypic slice cultures. We also show that ASAP2s enables two-photon imaging of graded potentials with subcellular resolution in organotypic slice cultures and in Drosophila. These results demonstrate that the combination of ASAP2s and fast two-photon imaging methods enables detection of neural electrical activity with subcellular spatial resolution and millisecond-timescale precision.
Article and author information
Author details
Funding
Burroughs Wellcome Fund
- Michael Z Lin
Rita Allen Foundation
- Michael Z Lin
Stanford University (Graduate and Interdisciplinary Graduate Fellowships)
- Helen H Yang
Stanford University (Stanford Neuroscience Microscopy Service pilot grant)
- Michael Z Lin
- François St-Pierre
Canadian Institutes of Health Research (MOP-81142)
- Katalin Toth
Natural Sciences and Engineering Research Council of Canada (RGPIN-2015-06266)
- Katalin Toth
Natural Sciences and Engineering Research Council of Canada (Graduate fellowship)
- Simon Chamberland
National Institutes of Health (1U01NS090600)
- Joseph C Wu
National Institutes of Health (HL12652701)
- Joseph C Wu
National Institutes of Health (R01 EY022638)
- Thomas R Clandinin
National Institutes of Health (R21 NS081507)
- Thomas R Clandinin
National Science Foundation (1707359)
- François St-Pierre
The funders had no role in study design, data collection and interpretation, or the decision to submit the work for publication.
Reviewing Editor
- Kristin Scott, University of California, Berkeley, Berkeley, United States
Ethics
Animal experimentation: Animal experiments were performed in accordance with either (1) the recommendations in the Guide for the Care and Use of Laboratory Animals of the National Institutes of Health and the guidelines of the Stanford Institutional Animal Care and Use Committee under Protocol APLAC-23407, or (2) the guidelines for animal welfare of the Canadian Council on Animal Care and protocols approved by the Université Laval Animal Protection Committee (protocol number 2014-149-3). All surgery was performed under sodium pentobarbital anesthesia, and every effort was made to minimize suffering.
Version history
- Received: February 3, 2017
- Accepted: July 21, 2017
- Accepted Manuscript published: July 27, 2017 (version 1)
- Version of Record published: September 5, 2017 (version 2)
- Version of Record updated: September 14, 2017 (version 3)
Copyright
© 2017, Chamberland et al.
This article is distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use and redistribution provided that the original author and source are credited.
Metrics
- 14,544 views
- 2,162 downloads
- 161 citations
Views, downloads and citations are aggregated across all versions of this paper published by eLife.