Postsynaptic GluA3 subunits are required for appropriate assembly of AMPA receptor GluA2 and GluA4 subunits on mammalian cochlear afferent synapses and for presynaptic ribbon modiolar-pillar morphological distinctions
Abstract
Cochlear sound encoding depends on α-amino-3-hydroxy-5-methyl-4-isoxazole propionic acid receptors (AMPARs), but the reliance on specific pore-forming subunits is unknown. Using 5-week-old male C57BL/6J Gria3 knockout mice (i.e., subunit GluA3KO), we determined cochlear function, synapse ultrastructure, and AMPAR molecular anatomy at ribbon synapses between inner hair cells (IHCs) and spiral ganglion neurons. GluA3KO and wild-type (GluA3WT) mice reared at an ambient sound pressure level (SPL) of 55-75 dB had similar auditory brainstem response (ABR) thresholds, wave-1 amplitudes, and latencies. Postsynaptic densities (PSDs), presynaptic ribbons, and synaptic vesicle sizes were all larger on the modiolar side of the IHCs from GluA3WT, but not GluA3KO, demonstrating that GluA3 is required for modiolar-pillar synapse differentiation. Presynaptic ribbons juxtaposed with postsynaptic GluA2/4 subunits were similar in quantity; however, lone ribbons were more frequent in GluA3KO, and GluA2-lacking synapses were observed only in GluA3KO. GluA2 and GluA4 immunofluorescence volumes were smaller on the pillar side than the modiolar side in GluA3KO, despite increased pillar-side PSD size. Overall, the fluorescent puncta volumes of GluA2 and GluA4 were smaller in GluA3KO than GluA3WT. However, GluA3KO contained less GluA2 and greater GluA4 immunofluorescence intensity relative to GluA3WT (3-fold greater mean GluA4:GluA2 ratio). Thus, GluA3 is essential in development, as germline disruption of Gria3 caused anatomical synapse pathology before cochlear output became symptomatic by ABR. We propose that the hearing loss in older male GluA3KO mice results from a progressive synaptopathy evident in 5-week-old mice as decreased abundance of GluA2 subunits and an increase in GluA2-lacking, GluA4-monomeric Ca2+-permeable AMPARs.
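To make the quantitative comparisons in the abstract concrete, below is a minimal, illustrative sketch (not the authors' analysis code) of how per-synapse GluA4:GluA2 immunofluorescence intensity ratios and modiolar- versus pillar-side puncta volumes could be compared between genotypes. All values, column names ("genotype", "side", "glua2_intensity", "glua4_intensity", "punctum_volume_um3"), and group sizes are hypothetical placeholders chosen only to mirror the direction of the reported effects.

```python
# Hypothetical sketch: compare GluA4:GluA2 intensity ratios and
# modiolar- vs pillar-side puncta volumes between WT and KO genotypes.
import numpy as np
import pandas as pd
from scipy import stats

rng = np.random.default_rng(0)
n = 40  # hypothetical number of synapses per genotype


def toy_group(genotype, glua2_mean, glua4_mean, mod_vol, pil_vol):
    """Generate a toy per-synapse table for one genotype."""
    side = rng.choice(["modiolar", "pillar"], size=n)
    vol_mean = np.where(side == "modiolar", mod_vol, pil_vol)
    return pd.DataFrame({
        "genotype": genotype,
        "side": side,
        "glua2_intensity": rng.normal(glua2_mean, 10, n),
        "glua4_intensity": rng.normal(glua4_mean, 10, n),
        "punctum_volume_um3": rng.normal(vol_mean, 0.05),
    })


synapses = pd.concat([
    toy_group("WT", glua2_mean=120, glua4_mean=110, mod_vol=0.45, pil_vol=0.30),
    toy_group("KO", glua2_mean=60, glua4_mean=150, mod_vol=0.30, pil_vol=0.22),
], ignore_index=True)

# Per-synapse subunit ratio; a roughly 3-fold higher mean in KO than WT
# would correspond to the GluA4:GluA2 shift described in the abstract.
synapses["glua4_to_glua2"] = synapses["glua4_intensity"] / synapses["glua2_intensity"]
print(synapses.groupby("genotype")["glua4_to_glua2"].mean())

# Within-genotype comparison of modiolar- vs pillar-side puncta volumes.
for genotype, grp in synapses.groupby("genotype"):
    mod = grp.loc[grp["side"] == "modiolar", "punctum_volume_um3"]
    pil = grp.loc[grp["side"] == "pillar", "punctum_volume_um3"]
    t, p = stats.ttest_ind(mod, pil, equal_var=False)
    print(f"{genotype}: modiolar vs pillar volume, t = {t:.2f}, p = {p:.3g}")
```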
Data availability
All data analyzed during this study are included in the manuscript.
Article and author information
Author details
Funding
National Institutes of Health (DC013048)
- Maria Eulalia Rubio
National Institutes of Health (DC14712)
- Mark A Rutherford
The funders had no role in study design, data collection and interpretation, or the decision to submit the work for publication.
Ethics
Animal experimentation: This study was performed in strict accordance with the recommendations in the Guide for the Care and Use of Laboratory Animals of the National Institutes of Health. All of the animals were handled according to approved institutional animal care and use committee (IACUC) protocols (#21100176 and #22030822) of the University of Pittsburgh. The protocol was approved by the Committee on the Ethics of Animal Experiments of the University of Pittsburgh (Permit Number: D16-00118). All auditory brainstem recordings were performed under isoflurane anesthesia. All the transcardial perfusions were performed under ketamine and xylazine anesthesia, and every effort was made to minimize suffering.
Copyright
© 2023, Rutherford et al.
This article is distributed under the terms of the Creative Commons Attribution License, permitting unrestricted use and redistribution provided that the original author and source are credited.