Targeted sensors for glutamatergic neurotransmission
Abstract
Optical reporting of neurotransmitter release allows visualization of excitatory synaptic transmission. Sensitive, genetically encoded glutamate reporters operating with a range of affinities and emission wavelengths are available. However, without targeting to synapses, the specificity of the fluorescent signal is uncertain compared with sensors directed at vesicles or other synaptic markers. We fused the state-of-the-art reporter iGluSnFR to glutamate receptor auxiliary proteins to target it to postsynaptic sites. Chimeras of Stargazin and gamma-8, which we named SnFR-γ2 and SnFR-γ8, were enriched at synapses, retained function and reported spontaneous glutamate release in rat hippocampal cells with apparently diffraction-limited spatial precision. In autaptic mouse neurons cultured on astrocytic microislands, evoked neurotransmitter release could be quantitatively detected at tens of synapses in a field of view whilst evoked currents were recorded simultaneously. These experiments revealed a specific postsynaptic deficit from Stargazin overexpression, resulting in synapses with normal neurotransmitter release but no postsynaptic responses. This defect was reversed by delaying overexpression. By working at different calcium concentrations, we determined that SnFR-γ2 is a linear reporter of the global quantal parameters and short-term synaptic plasticity, whereas iGluSnFR is not. On average, half of the iGluSnFR regions of interest (ROIs) showing evoked fluorescence changes exhibited severe rundown, whereas less than 5% of SnFR-γ2 ROIs did. We provide an open-source analysis suite for extracting quantal parameters, including release probability, from fluorescence time series of individual and grouped synaptic responses. Taken together, postsynaptic targeting improves several properties of iGluSnFR and further demonstrates the importance of subcellular targeting for optogenetic actuators and reporters.
Data availability
Custom software is available at GitHub.com/agplested/saft
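The abstract mentions extracting quantal parameters, including release probability, from fluorescence time series. As a rough illustration only (this is not the SAFT interface; all function and variable names below are hypothetical), a minimal Python sketch of one standard approach, measuring per-trial peak responses at a single ROI and applying the Poisson method of failures, might look like this:

```python
import numpy as np

def peak_amplitudes(traces, baseline, window):
    """Peak dF/F per trial: maximum over the response window minus the pre-stimulus baseline mean.

    traces: array of shape (n_trials, n_frames) for one ROI.
    baseline, window: (start, stop) frame indices.
    """
    base = traces[:, baseline[0]:baseline[1]].mean(axis=1, keepdims=True)
    return (traces[:, window[0]:window[1]] - base).max(axis=1)

def quantal_estimates(amps, fail_threshold, quantal_size=None):
    """Simple quantal summary from per-trial evoked amplitudes at one ROI.

    Returns:
    - p_release: fraction of trials whose peak exceeds the failure threshold
    - m_failures: mean quantal content from the Poisson method of failures, m = -ln(P_failure)
    - m_amplitude: quantal content from mean amplitude, if a single-quantum size is supplied
    """
    failures = amps < fail_threshold
    p_fail = failures.mean()
    p_release = 1.0 - p_fail
    m_failures = -np.log(p_fail) if 0 < p_fail < 1 else np.nan
    m_amplitude = amps.mean() / quantal_size if quantal_size else np.nan
    return {"p_release": p_release, "m_failures": m_failures, "m_amplitude": m_amplitude}

# Example with synthetic data: 100 trials of 500 frames, noise only (expect p_release near 0).
rng = np.random.default_rng(0)
traces = rng.normal(0.0, 0.02, size=(100, 500))
amps = peak_amplitudes(traces, baseline=(0, 50), window=(100, 200))
print(quantal_estimates(amps, fail_threshold=0.05))
```

For the actual analysis pipeline used in the paper, including grouped synaptic responses and release-probability estimation, see the SAFT repository linked above.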
Article and author information
Author details
Funding
Deutsche Forschungsgemeinschaft (390688087)
- Christian Rosenmund
- Andrew J R Plested
European Research Council (647895)
- Andrew J R Plested
Deutsche Forschungsgemeinschaft (323514590)
- Andrew J R Plested
Deutsche Forschungsgemeinschaft (446182550)
- Andrew J R Plested
The funders had no role in study design, data collection and interpretation, or the decision to submit the work for publication.
Ethics
Animal experimentation: Animal housing and use were in compliance with, and approved by, the Animal Welfare Committee of Charité Medical University and the Berlin State Government Agency for Health and Social Services (Licenses T0220/09 and FMP_T 03/20). Newborn C57BL/6N mice (P0-P2) and rats (P1-P3) of both sexes were used for all the experiments.
Copyright
© 2023, Hao et al.
This article is distributed under the terms of the Creative Commons Attribution License permitting unrestricted use and redistribution provided that the original author and source are credited.
Metrics
- 3,094 views
- 412 downloads
- 10 citations
Views, downloads and citations are aggregated across all versions of this paper published by eLife.