Neural signatures of auditory hypersensitivity following acoustic trauma

  1. Matthew McGill (corresponding author)
  2. Ariel E Hight
  3. Yurika L Watanabe
  4. Aravindakshan Parthasarathy
  5. Dongqin Cai
  6. Kameron Clayton
  7. Kenneth E Hancock
  8. Anne Takesian
  9. Sharon G Kujawa
  10. Daniel B Polley
  1. Eaton-Peabody Laboratories, Massachusetts Eye and Ear Infirmary, United States
  2. Division of Medical Sciences, Harvard Medical School, United States
  3. Department of Otolaryngology - Head and Neck Surgery, Harvard Medical School, United States

Abstract

Neurons in sensory cortex exhibit a remarkable capacity to maintain stable firing rates despite large fluctuations in afferent activity levels. However, sudden peripheral deafferentation in adulthood can trigger an excessive, non-homeostatic cortical compensatory response that may underlie perceptual disorders including sensory hypersensitivity, phantom limb pain, and tinnitus. Here, we show that mice with noise-induced damage of the high-frequency cochlear base were behaviorally hypersensitive to spared mid-frequency tones and to direct optogenetic stimulation of auditory thalamocortical neurons. Chronic two-photon calcium imaging from auditory cortex (ACtx) pyramidal neurons (PyrNs) revealed an initial stage of spatially diffuse hyperactivity, hyper-correlation, and auditory hyperresponsivity that consolidated around deafferented map regions three or more days after acoustic trauma. Deafferented PyrN ensembles also displayed hypersensitive decoding of spared mid-frequency tones that mirrored behavioral hypersensitivity, suggesting that non-homeostatic regulation of cortical sound intensity coding following sensorineural loss may be an underlying source of auditory hypersensitivity. Excess cortical response gain after acoustic trauma was expressed heterogeneously among individual PyrNs, yet 40% of this variability could be accounted for by each cell’s baseline response properties prior to acoustic trauma. PyrNs with initially high spontaneous activity and gradual monotonic intensity growth functions were more likely to exhibit non-homeostatic excess gain after acoustic trauma. This suggests that while cortical gain changes are triggered by reduced bottom-up afferent input, their subsequent stabilization is also shaped by their local circuit milieu, where indicators of reduced inhibition can presage pathological hyperactivity following sensorineural hearing loss.

Editor's evaluation

This is an important and methodologically compelling paper that provides novel views on the functional plasticity of cortical networks following noise trauma. The combination of cortical recordings, optogenetics, and behavior provides valuable mechanistic insights into the processes that may underlie auditory hypersensitivity after cochlear damage. The extensive and well-illustrated manuscript provides an excellent example of a study in which modern neurophysiology techniques advance comprehension of pathologies related to maladaptive changes in the brain.

https://doi.org/10.7554/eLife.80015.sa0

Introduction

Sensory disorder research typically focuses on the mechanisms underlying inherited or acquired forms of sensory loss. But some of the most common and debilitating sensory disorder phenotypes reflect just the opposite problem; not what cannot be perceived, but what cannot stop being perceived. Sensory overload typically presents as three inter-related features: (1) an irrepressible perception of phantom stimuli that have no physical environmental source, (2) a selective inability to perceptually suppress distracting sensory features, and (3) the perception that moderate stimuli are uncomfortably intense, distressing, or even painful. One or more of these phenotypes are commonly observed in heterogeneous neurological and psychiatric disorders including autism spectrum disorder (Klintwall et al., 2011; Robertson and Baron-Cohen, 2017), post-traumatic stress disorder (Ehlers and Clark, 2000; Garfinkel and Liberzon, 2009), fibromyalgia (Nielsen and Henriksson, 2007; Yunus, 2007), schizophrenia (González-Rodríguez et al., 2021; Luck and Gold, 2008), traumatic brain injury (Nampiaparampil, 2008), sensorineural hearing loss (SNHL) (Auerbach et al., 2014; Hébert et al., 2013; Pienkowski et al., 2014a), attention deficit hyperactivity disorder (Ghanizadeh, 2011), migraine (Goadsby et al., 2017), and also as a consequence of normal aging (Herrmann and Butler, 2021a).

Although overload phenotypes are reported across many sensory modalities, they are most common and debilitating in hearing, where an estimated 14% of the adult population continuously perceives a phantom ringing or buzzing sound (i.e., tinnitus) (Shargorodsky et al., 2010), 9% of adults report hypersensitivity, distress, or even pain in response to ordinary environmental sounds (i.e., hyperacusis) (Pienkowski et al., 2014a; Pienkowski et al., 2014b), and 9% of adults seek health care for poor hearing in complex listening environments but do not have hearing loss (Parthasarathy et al., 2020a). While each facet of auditory overload can occur without evidence of peripheral dysfunction, all are common in persons with SNHL arising from age- or noise-induced degeneration of cochlear hair cells and cochlear afferent nerve terminals (for review see Auerbach and Gritton, 2022; Herrmann and Butler, 2021b; Noreña, 2011; Zeng, 2013).

In developing sensory systems, deprivation of peripheral input is registered by central sensory neurons via decreased cytosolic calcium levels, which triggers a cascade of genetic, epigenetic, post-transcriptional, and post-translational processes that collectively adjust the electrical excitability of neurons to restore activity back to the baseline set point range (for review see Harris and Rubel, 2006; Turrigiano, 2012). Compensatory changes in the central auditory pathway can be grouped into three categories: (1) synaptic sensitization via upregulation of receptors for excitatory neurotransmitters (e.g., AMPA receptor scaling) (Balaram et al., 2019; Kotak et al., 2005; Sturm et al., 2017; Teichert et al., 2017); (2) synaptic disinhibition via removal of inhibitory neurotransmitter receptors (Balaram et al., 2019; Richardson et al., 2013; Sanes and Kotak, 2011; Sarro et al., 2008; Sturm et al., 2017); (3) increased intrinsic excitability via changes in the amount and subunit composition of voltage-gated ion channels that set the resting membrane potential, membrane resistance, and spike ‘burstiness’ (Li et al., 2015; Li et al., 2013; Pilati et al., 2012; Wu et al., 2016; Yang et al., 2012). Activity changes arising from these compensatory processes are often described as homeostatic plasticity (Herrmann and Butler, 2021b; Schaette and Kempter, 2006), but in the context of adult cortical plasticity after hearing loss they typically culminate in a failure to maintain neural excitability at a stable set point. Whereas homeostatic changes – by definition – restore neural activity to a baseline activity rate following a perturbation (Turrigiano, 2012), neural gain adjustments following adult-onset hearing loss often over-shoot the mark, producing catastrophic downstream consequences at the level of network excitability and sound perception (Eggermont, 2017; Nahmani and Turrigiano, 2014; Noreña, 2011).

Pinpointing the perceptual consequences of excess central auditory gain requires translating molecular and synaptic changes into measurements that can be made in intact, and even behaving animals. With conventional microelectrode recordings, the synaptic and intrinsic compensatory processes described above manifest as increased spontaneous activity rates, increased spike synchrony, steeper slopes of sound intensity growth functions, and poor adaptation to background noise sources (for recent review see Auerbach and Gritton, 2022; Herrmann and Butler, 2021b). The connection between these extracellular signatures of excess neural gain and auditory perceptual disorders has remained obscure, in part due to a frequent reliance on involuntary behaviors with an unclear relationship to sound perception (for review see Boyen et al., 2015; Brozoski and Bauer, 2016; Campolo et al., 2013; Hayes et al., 2014). Further, in vivo measurements have generally taken the form of acute recordings of local field potentials or unidentified excitatory and inhibitory unit types, often in anesthetized animals, and often without detailed measurements of the peripheral insult or the topographic correspondence between neural recording sites and deafferented map regions.

Here, we developed an approach to make more direct operant behavioral measurements of auditory hypersensitivity in mice with detailed characterizations of cochlear lesions. These behavioral measurements were combined with an optical approach to visualize spontaneous and sound-evoked calcium transients in awake mice from hundreds of individual excitatory neurons spanning the entire topographic map of the primary auditory cortex (A1). By performing these measurements before and after noise-induced SNHL, we were able to return to the same neurons over a 3- to 4-week period to identify baseline response features that could predict whether a given neuron would subsequently exhibit stable, homeostatic activity regulation or non-homeostatic excess gain following acoustic trauma.

Results

Perceptual hypersensitivity following noise-induced high-frequency SNHL

In humans, a steeply sloping high-frequency hearing loss is a telltale signature of SNHL (Allen and Eddins, 2010; Hannula et al., 2011). We reviewed 132,504 case records from visitors to the audiology clinic at our institution and determined that 23% of pure tone audiograms fit the description of high-frequency SNHL (Figure 1A), underscoring that it is a common clinical condition frequently associated with tinnitus, abnormal loudness growth, and poor speech intelligibility in noise (Horwitz et al., 2002; Lewis et al., 2020; Moore et al., 1999; Oxenham and Bacon, 2003; Strelcyk and Dau, 2009). To model this hearing loss profile in genetically tractable laboratory mice, we induced SNHL through exposure to narrow-band high-frequency noise (16–32 kHz) at 103 dB SPL for 2 hr. Repeated cochlear function testing before and after noise exposure revealed a sustained elevation of high-frequency thresholds for wave 1 of the auditory brainstem response (ABR) and cochlear distortion product otoacoustic emissions (DPOAEs) with mild or no threshold elevation below 16 kHz (Figure 1B, C).

Electrophysiological, anatomical, and behavioral confirmation of noise-induced, high-frequency sensorineural hearing loss.

(A) In human subjects, analysis of 132,504 pure tone audiograms indicates that 23% of visitors to our audiology clinic present with steeply sloping high-frequency hearing loss. Values represent mean ± standard deviation (SD) hearing thresholds in units of dB hearing loss (HL). In mice, response thresholds for wave 1 of the auditory brainstem response (ABR) (B) and cochlear distortion product otoacoustic emission (DPOAE) (C) measured before and at various time points after acoustic trauma show a permanent threshold shift at high frequencies (two-way repeated measures analyses of variance (ANOVAs), Frequency × Time interaction terms for ABR wave 1 [F = 87.51, p = 6 × 10–26] and DPOAE [F = 46.44, p = 2 × 10–29]). Asterisks denote significant differences between baseline and day 7+ measurements with post hoc pairwise comparisons (p < 0.05). (D) Cochleae immunostained with anti-CtBP2 and anti-GluR2a antibodies reveal reduced presynaptic ribbon and post-synaptic glutamate receptor patches, respectively, at high-frequency regions of the cochlea (>22 kHz) in trauma mice compared to sham exposure controls (mixed model ANOVA with Group as a factor, Frequency as a repeated measure, and cochlear synapse count as the outcome measure: Group × Frequency interaction term, F = 22.33, p = 4 × 10–11). Asterisks denote significant differences between Sham and Trauma with post hoc pairwise comparisons (p < 0.05). (E) Loss of outer hair cell (OHC) bodies is limited to the extreme basal regions of the cochlea in noise-exposed animals (Group × Frequency interaction term, F = 11.54, p = 4 × 10–7). (F–G) Anatomical substrates for cochlear threshold shifts (B, C) in more apical cochlear regions can be linked to comparatively subtle OHC stereocilia damage, as visualized by anti-Espin immunolabeling of actin bundle proteins. Cochlear location is approximately 32 kHz. Scale bars represent 10 μm. (H) Schematic depicts the design of a head-fixed Go/NoGo tone detection task. 
(I) A modified 2-up, 1-down adaptive staircasing approach to study tone detection thresholds. Example data show one run of 8 kHz tone detection, which terminates after six reversals. (J) Logistic fits of 8 and 32 kHz Go probability functions for one mouse measured before, hours after, and 20 days following acoustic trauma. Dotted lines show threshold as determined by the adaptive tracking method. (K) Daily behavioral threshold measurements from 13 mice (N = 7 trauma) over an approximately 3-week period show a permanent increase in 32 kHz, but not 8 kHz, thresholds after acoustic trauma. Mixed model ANOVA with Group as a factor and both Frequency and Time as repeated measures: main effects for Group [F = 157.76, p = 8 × 10–8], Frequency [F = 368.87, p = 9 × 10–10], and Time [F = 44.21, p = 6 × 10–53]; Group × Frequency × Time interaction [F = 37.98, p = 2 × 10–48].

Post-mortem cochlear histopathology performed 21 days after noise exposure suggested an anatomical substrate for cochlear function changes due to acoustic trauma. Compared to age-matched unexposed control ears, high-frequency noise exposure eliminated approximately 50% of synaptic contacts between inner hair cells (IHCs) and primary cochlear afferent neurons in the high-frequency base of the cochlea (Figure 1D). Noise-induced outer hair cell (OHC) death was only observed in the high-frequency extreme of the cochlear base (Figure 1E), though more subtle OHC stereocilia damage was evident throughout mid- and high-frequency cochlear regions (Figure 1F, G).

To make a more direct comparison to clinical determinations of hearing loss in humans via pure tone behavioral thresholds, we also performed behavioral tone detection in head-fixed mice (Figure 1H). Mice were trained to report their detection of mid- or high-frequency tones (8 and 32 kHz, respectively) by licking a water reward spout shortly after tone delivery. Behavioral thresholds were determined with a modified 2-up, 1-down adaptive staircase that presented a combination of liminal tone intensities along with no-tone catch trials (Figure 1I). Behavioral detection thresholds were measured every 1–3 days before and after noise-induced SNHL (trauma, N = 7) and in control mice exposed to an innocuous noise level (sham, N = 6), revealing a stable 45 dB increase in high-frequency tone thresholds after traumatic noise exposure without commensurate changes in false alarm rates or mid-frequency detection thresholds (Figure 1J, K; statistical analyses are provided in figure legends).
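The adaptive tracking logic can be sketched in a few lines. The version below is a simplified, literal reading of a 2-up, 1-down staircase (the paper's procedure is described only as a "modified" variant, so the exact trial contingencies here are assumptions): two consecutive misses raise the tone level by one step, a single hit lowers it, and the run terminates after six reversals, with threshold taken as the mean level at the reversal points. The `detects` callback and all parameter values are hypothetical.

```python
def staircase_threshold(detects, start_db=60.0, step_db=5.0, max_reversals=6):
    """Sketch of a 2-up, 1-down adaptive staircase (assumed contingencies).

    `detects(level_db)` returns True when the subject reports the tone.
    Two consecutive misses raise the level one step; a single hit lowers it.
    The run ends after `max_reversals` direction changes, and threshold is
    the mean level at the reversal points.
    """
    level, misses, direction = start_db, 0, 0
    reversals = []
    while len(reversals) < max_reversals:
        if detects(level):
            misses, new_dir = 0, -1        # hit: step down
        else:
            misses += 1
            if misses < 2:
                continue                    # re-test at the same level after one miss
            misses, new_dir = 0, +1         # two consecutive misses: step up
        if direction and new_dir != direction:
            reversals.append(level)         # direction change = reversal
        direction = new_dir
        level += new_dir * step_db
    return sum(reversals) / len(reversals)
```

With a deterministic simulated observer whose true threshold sits between two step levels, the staircase oscillates around it and the reversal average lands between those levels.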

Behavioral hypersensitivity to cochlear lesion edge frequencies or direct auditory thalamocortical activation after SNHL

High-frequency OHC damage was the likely source of elevated ABR (Figure 1B), DPOAE (Figure 1C), and behavioral detection (Figure 1K) thresholds. The additional loss of auditory nerve afferent synapses onto IHCs (Figure 1D) would not be expected to affect hearing thresholds, but could have other influences on sound perception (Chambers et al., 2016a; Henry and Abrams, 2021; Lobarinas et al., 2013; Resnik and Polley, 2021). For example, we recently reported that human subjects with normal hearing thresholds but asymmetric degeneration of the left and right auditory nerve perceive tones of fixed physical intensity as louder in the ear with poor auditory nerve integrity, particularly for low intensities near sensation level (Jahn et al., 2022). To determine whether mice showed evidence of auditory hypersensitivity, we performed a closer inspection of the mouse psychometric detection functions for the spared 8 kHz tone. This analysis confirmed that, following acoustic trauma, the behavioral sensitivity index (d-prime, or d′) grew more steeply for sound intensities near threshold compared both to pre-exposure baseline detection functions and to sham-exposed controls (Figure 2A, B). These effects were consistent at the level of individual mice, where we noted an increase in psychometric growth slopes by at least 30% from baseline in 7/7 acoustic trauma mice but changes of this magnitude were not observed in any sham-exposed control mice (Figure 2B, right).
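The sensitivity index and the psychometric slope ("perceptual gain", d′ per dB) can be computed from hit and false-alarm rates in the standard signal-detection way. The sketch below assumes a simple least-squares slope of d′ against sound level; the 1/(2N) boundary correction is one common convention and not necessarily the one used in the paper, and all inputs are hypothetical.

```python
from statistics import NormalDist

def d_prime(hit_rate, fa_rate, n_trials=50):
    """d' = z(hit) - z(false alarm), with a 1/(2N) correction to keep
    rates off the 0 and 1 boundaries (a common, assumed convention)."""
    lo, hi = 1 / (2 * n_trials), 1 - 1 / (2 * n_trials)
    z = NormalDist().inv_cdf
    clip = lambda p: min(max(p, lo), hi)
    return z(clip(hit_rate)) - z(clip(fa_rate))

def perceptual_gain(levels_db, hit_rates, fa_rate):
    """Mean increase in d' per dB across the tested levels
    (ordinary least-squares slope of d' against sound level)."""
    d = [d_prime(h, fa_rate) for h in hit_rates]
    n = len(levels_db)
    mx, my = sum(levels_db) / n, sum(d) / n
    num = sum((x - mx) * (y - my) for x, y in zip(levels_db, d))
    den = sum((x - mx) ** 2 for x in levels_db)
    return num / den
```

A steepened post-trauma detection function then shows up directly as a larger `perceptual_gain` value relative to the animal's own baseline.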

Hypersensitivity to sound and direct auditory thalamocortical stimulation following acoustic trauma.

(A) Behavioral detection functions (reported as the sensitivity index) (d′) across time for sham- and noise-exposed example mice show hypersensitivity to spared, mid-frequency tones. (B) Left, change in perceptual gain for the 8 kHz tone, measured as the mean increase in d′ per dB increase in sound level, relative to baseline performance. Perceptual gain for an 8 kHz tone is increased in acoustic trauma mice (N = 7) compared to sham (N = 6) but does not change over post-exposure time (analysis of variance [ANOVA] with Group as a factor and Time as a repeated measure, main effect for Group, F = 12.42, p = 0.005; main effect for Time, F = 0.52, p = 0.67). Right, mean perceptual gain change collapsed across time is separately plotted for each mouse. (C) Left, preparation for the mixed modality optogenetic and tone detection task. Right, ChR2 expression in auditory thalamocortical neurons and placement of the implanted optic fiber relative to retrogradely labeled cell bodies in the medial geniculate body (MGB). (D) During 32 kHz tone detection blocks, detection thresholds are elevated by approximately 50 dB following trauma (N = 6, paired t-test, p = 0.0001) but no change was noted in sham-exposed mice (N = 3, p = 0.74). (E) Psychometric detection functions for optogenetic auditory thalamocortical stimulation before and after acoustic trauma or sham exposure in two example mice. (F) Left, summed change across the d′ function relative to baseline, for noise- and sham-exposed mice. Individual mice are thin lines, group means are thick lines. Example mice from E are indicated by the arrows. After trauma, mice became hypersensitive to MGB stimulation, suggesting an auditory thalamocortical contribution to perceptual hypersensitivity (ANOVA with Group as a factor and post-exposure Time as a repeated measure; main effect for Group, F = 15.54, p = 0.006; main effect for time, F = 4.65, p = 0.07). Right, d′ collapsed across time is separately plotted for each mouse.

Persons with SNHL commonly report loudness recruitment, a disproportionately steep growth of loudness with increasing sound level, which has been attributed to altered basilar membrane mechanics after OHC damage (Oxenham and Bacon, 2003). However, loudness recruitment is most pronounced for frequencies within the range of hearing loss and at intensities well above sensation level (Buus and Florentine, 2002; Moore, 2004). Here, we observed steeper growth of auditory sensitivity for a spared frequency and at intensities close to sensation level, neither of which would be expected of a purely peripheral origin.

To investigate the possibility that increased neural gain in the central pathway could contribute to auditory hypersensitivity, we built upon pioneering work that directly stimulated deafferented central auditory nuclei to demonstrate a direct association between neural and behavioral hypersensitivity (Gerken, 1979). Here, we interleaved acoustic and optogenetic stimulation (Guo et al., 2015) to test the hypothesis that mice with high-frequency SNHL would be hypersensitive to stimulation that bypassed the peripheral damage and directly stimulated feedforward excitatory projection neurons in the central auditory pathway. We used an intersectional virus strategy to express channelrhodopsin in auditory thalamocortical neurons, an exclusively glutamatergic feedforward projection system (Hackett et al., 2016; Figure 2C). Using a Go/NoGo operant task, detection probability was tested in alternating blocks for high-frequency bandpass noise (centered at 32 kHz) or optogenetic thalamocortical stimulation. Following acoustic trauma, high-frequency detection thresholds were elevated by approximately 50 dB (N = 6), whereas sham exposure had no comparable effect (N = 3; Figure 2D). Importantly, testing in interleaved trials revealed behavioral hypersensitivity to direct stimulation of auditory thalamocortical neurons after acoustic trauma (Figure 2E), as evidenced by significantly increased d′ at a fixed set of optogenetic stimulation intensities compared to sham controls (Figure 2F). Although acoustic trauma can introduce changes in sound processing throughout all stages of the central pathway, the observation of behavioral hypersensitivity to direct stimulation of thalamocortical projection neurons suggests that compensatory plasticity within the auditory cortex (ACtx) is a reasonable place to investigate the neural underpinnings of auditory hypersensitivity following acoustic trauma.

Chronic imaging in A1 reveals tonotopic remapping and dynamic spatiotemporal adjustments in neural response gain after acoustic trauma

Following topographically restricted cochlear lesions, neurons in deafferented ACtx map regions reorganize to preferentially encode spared cochlear frequencies bordering the damaged region without accompanying elevations in response threshold (Engineer et al., 2011; Noreña et al., 2003; Robertson and Irvine, 1989; Seki and Eggermont, 2003; Yang et al., 2011). Because the degree and form of cortical plasticity following cochlear deafferentation can differ between excitatory neurons and various types of inhibitory neurons (Masri et al., 2021; Resnik and Polley, 2021; Wang et al., 2022), we restricted ACtx activity measurements to excitatory neurons via chronic calcium imaging in Thy1-GCaMP6s × CBA mice, where expression of a high-sensitivity genetically encoded calcium indicator is limited to cortical PyrNs (Chen et al., 2013; Romero et al., 2019). In initial experiments, we performed widefield epifluorescence calcium imaging in awake mice, which offers simultaneous visualization of all fields within the ACtx at mesoscale resolution (Figure 3A). These experiments confirmed that tonotopic maps were relatively stable over ~4 weeks in mice with normal hearing experience (Issa et al., 2014; Romero et al., 2019) but underwent large-scale reorganization throughout the week following acoustic trauma before stabilizing in a state that over-represented 8–16 kHz tones that bordered the high-frequency cochlear lesion (Figure 3B).

Tonotopic remapping within the cortical deafferentation zone revealed by chronic mesoscale and cellular calcium imaging.

(A) Top, approach for widefield calcium imaging using a tandem lens epifluorescence microscope in awake Thy1-GCaMP6s × CBA/CaJ mice that express GCaMP6s in pyramidal neurons (PyrNs). Bottom, schematic depicts the typical arrangement of individual fields within the ACtx based on tonotopic best frequency (BF) gradients (as detailed in Romero et al., 2019). (B) Chronic widefield BF maps in example sham and trauma mice (top and bottom, respectively) show BF remapping within the deafferented high-frequency regions throughout the ACtx after acoustic trauma. (C) Left, approach for chronic two-photon calcium imaging of layer 2/3 PyrNs along the A1 tonotopic gradient. Right, two-photon imaging field-of-view superimposed on the widefield BF map measured in another example mouse. (D) Top, example of tone-evoked GCaMP transients measured as the fractional change in fluorescence and deconvolved activity. Bottom, peak deconvolved amplitudes for tones of varying frequencies and levels are used to populate the complete frequency-response area and derive the BF (downward arrow) and threshold (leftward arrow) for each neuron. (E) BF arrangements in L2/3 PyrNs measured at three times over the course of a month in representative sham and trauma mice. A support vector machine (SVM) was trained to bisect the low- and high-frequency zones of the A1 BF map (LF [<16 kHz] and HF [≥16 kHz], respectively). The dashed line represents the SVM-derived boundary to segregate the LF and HF regions. The SVM line is determined for each mouse on day −4 and then applied to the same physical location for all future imaging sessions following alignment. (F) Timeline for chronic two-photon imaging and cochlear function testing in each sham and trauma mouse. (G) Individual PyrNs are placed into five distance categories based on their Euclidean distance to the SVM line, and the BF of each category is expressed as the median ± bootstrapped error. 
Following trauma, BFs in the HF zone are remapped to sound frequencies at the edge of the cochlear lesion. (H) Across all tone-responsive PyrNs measured at three time points, the percent of neurons with BFs corresponding to edge frequencies (11.3–16 kHz) was greater in trauma mice (N = 4 mice, n = 1749 PyrNs) than sham (N = 4 mice, n = 1748 PyrNs), was greater in the deafferented HF region than the intact LF region, and increased over time in trauma mice compared to sham controls (three-way analysis of variance [ANOVA] with Group, Region, and Time as factors: main effect for Group, F = 34.29, p = 5 × 10–9; Group × Region interaction term, F = 7.42, p = 0.007; Group × Time interaction term, F = 10.17, p = 0.00004). (I) Competitive expansion of edge frequency BFs in the deafferented HF zone was not accompanied by a change in neural response threshold (three-way ANOVA: main effect for Group, F = 0.8, p = 0.37; Group × Region interaction term, F = 0.93, p = 0.33; Group × Time interaction term, F = 1.33, p = 0.27).

Having confirmed that A1 was a promising target for more detailed characterizations, we changed our approach to two-photon cellular-scale measurements of A1 layer 2/3 neurons (Figure 3C), a cortical layer that shows robust central gain enhancements after acoustic trauma (Novák et al., 2016; Parameshwarappa et al., 2022; Schormans et al., 2019). Individual L2/3 PyrNs exhibited strong tone-evoked transients and well-organized tonal receptive fields (Figure 3D) that were measured simultaneously across hundreds of PyrNs to reveal a coarse low- to high-frequency tonotopic gradient. We used a support vector machine (SVM) to bisect the pre-exposure A1 tonotopic map into a low-frequency intact zone and high-frequency deafferented zone (Figure 3E, top). By returning to the same A1 region for imaging every 1–3 days (Figure 3F), we confirmed that L2/3 PyrNs shifted their preferred frequency toward frequencies bordering the cochlear lesion (Figure 3E, bottom). Analysis of all tone-responsive PyrNs (n = 1749 in four trauma mice; n = 1748 in four sham mice) demonstrated that tonotopic remapping was limited to the deafferented zone (Figure 3G), where the percentage of L2/3 PyrNs preferentially tuned to lesion edge frequencies more than doubled following acoustic trauma (Figure 3H) without any systematic change in response threshold (Figure 3I).
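The map bisection step can be sketched with a linear-kernel scikit-learn SVM fit to baseline cell coordinates labeled by best frequency, with the resulting boundary reused (after registration) for all later sessions. The coordinates and BF model below are synthetic stand-ins, not the paper's data.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Hypothetical baseline session: (x, y) cortical coordinates in microns and
# each PyrN's best frequency (BF) in kHz along a coarse tonotopic axis.
xy = rng.uniform(0, 500, size=(200, 2))
bf_khz = 4 * 2 ** (xy[:, 0] / 500 * 3 + rng.normal(0, 0.3, 200))  # ~4-32 kHz

labels = (bf_khz >= 16).astype(int)          # 0 = LF intact, 1 = HF deafferented
svm = SVC(kernel="linear").fit(xy, labels)   # fit once on the baseline session

# The fixed boundary then assigns each tracked cell a signed Euclidean
# distance to the LF/HF border in every later (registered) session:
w, b = svm.coef_[0], svm.intercept_[0]
signed_dist = (xy @ w + b) / np.linalg.norm(w)
```

Binning `signed_dist` into distance categories (as in Figure 3G) then lets BF remapping be quantified as a function of distance from the deafferentation boundary.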

A marked increase in PyrNs tuned to lesion edge frequencies could contribute to the enhanced perceptual sensitivity to 8 kHz tones identified in behavioral experiments (Figure 2). To address the potential association between the neural and perceptual changes more explicitly, we measured the growth of PyrN responses across sound intensities both as a function of where neurons were located relative to the deafferentation boundary and when – following noise exposure – the measurement was made (Figure 4). We first quantified neural response gain as the change in PyrN response with increasing sound intensity, where the particular range of intensities used for the calculation was determined from the overall shape of the growth function (Figure 5A, Figure 5—figure supplement 1A).
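As a rough illustration, neural gain can be computed as the slope of the level-response function over its rising limb. The paper's precise rules for choosing the intensity range (Figure 5—figure supplement 1) differ by growth-function shape, so the sketch below is a simplified stand-in with synthetic inputs.

```python
import numpy as np

def neural_gain(levels_db, responses):
    """Simplified sketch: average rate of response growth (response units
    per dB) over the rising limb of the level-response function, i.e. from
    the minimum up to the peak response. The paper's exact range-selection
    rules for monotonic vs. nonmonotonic functions are not reproduced here."""
    levels = np.asarray(levels_db, float)
    resp = np.asarray(responses, float)
    peak = int(np.argmax(resp))
    lo = int(np.argmin(resp[: peak + 1]))     # start of the rising limb
    if peak == lo:
        return 0.0
    # least-squares slope over the rising limb
    return float(np.polyfit(levels[lo:peak + 1], resp[lo:peak + 1], 1)[0])
```

Fold change relative to baseline, as plotted later in Figure 5C, would then be the ratio of a post-exposure gain to the corresponding pre-exposure gain for the same population.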

Spatial and temporal expression of excess central gain following acoustic trauma.

(A) Response amplitudes to 8 kHz tones presented at varying levels (columns) and days (rows) relative to noise exposure for 185 PyrNs in an example mouse. (B) Intensity-response functions for 66 randomly selected PyrNs recorded on days −4, 1, and 18 relative to the day of noise exposure. (C) Mean tone-evoked responses for all PyrNs relative to the support vector machine (SVM) deafferentation boundary at 50–80 dB SPL plotted separately for each of the 3 days.

Increased central gain is associated with hypersensitive neural encoding of low-intensity sounds.

(A) Neural gain is measured as the average rate of response growth in the sound level-response function. A detailed description of how neural gain is measured for different types of level-response functions is provided in Figure 5—figure supplement 1. (B) Mean change in response gain to an 8 kHz tone relative to the baseline support vector machine (SVM) demarcation of the low-frequency (LF) intact and high-frequency (HF) deafferented regions of the A1 map from all sham-exposed (top) and all noise-exposed (bottom) mice. (C) Fold change in 8 kHz response gain relative to the pre-exposure period in sham (n = 23,007 PyrNs from four mice) and trauma (n = 23,319 PyrNs from four mice). After acoustic trauma, the response gain for low- and mid-frequency tones is temporarily increased in the intact region. A sustained increase in response gain is observed in the deafferented region, particularly for tone frequencies bordering the cochlear lesion. Four-way analysis of variance (ANOVA) with Group, Region, Time, and Frequency as factors; main effects, respectively: F = 111.03, p = 6 × 10–26; F = 0.03, p = 0.87; F = 23.21, p = 4 × 10–19; F = 9.87, p = 2 × 10–6; Group × Region × Time × Frequency interaction term: F = 2.23; p = 0.008. Asterisks denote significant pairwise post hoc differences between groups (p < 0.05). (D) Neural ensemble responses to single trials of sound or silence were decomposed into principal components (PC) and classified with an SVM decoder. The first two PCs are presented from an example mouse 1 day before or after acoustic trauma for an 8 kHz tone. Single trial classification accuracy is provided for each sound intensity. (E) Mean decoding accuracy for 8 and 32 kHz tones across all noise-exposed mice as a function of sound intensity at varying times following acoustic trauma. (F) Mean change in decoding accuracy across all intensities for 8 and 32 kHz tones for L2/3 PyrNs in the intact and deafferented regions of the A1 map. 
For 8 kHz tones, PyrN ensemble decoding shows sustained improvement in the deafferented region but a temporary improvement in the intact region. Ensemble decoding of 32 kHz tones is reduced for all time points and measurement regions. Dots represent single imaging sessions. Bars denote mean ± standard error of the mean (SEM). Asterisks represent significant differences with unpaired t-tests (p < 0.05).

We noted that neural gain was broadly enhanced across the tonotopic map for the first several days following trauma but then receded to the high-frequency deafferented region in measurements made 1 week or more following noise exposure (Figure 5B). Next, we expanded the neural gain analysis in sham and trauma mice to four stimulus frequencies: a high-frequency tone aligned to the damaged cochlear region (32 kHz), a spared low-frequency tone far from the cochlear lesion (5.7 kHz), and two spared mid-frequency tones near the edge of the cochlear lesion (8 and 11.3 kHz). Neural gain at each of the four test frequencies was measured separately for intact and deafferented cortical zones and expressed in units of fold change relative to baseline gain measured from the corresponding population response prior to noise exposure. This analysis identified several clear results: (1) a strong initial uptick in neural gain measured in both topographic regions following trauma; (2) persistent (lasting greater than 1 week) increases in neural gain were observed only for spared mid-frequency tones in deafferented cortical regions; (3) no significant changes in neural gain were observed in sham-exposed mice (Figure 5C). Thus, excess central gain reflected the interaction of four factors: (1) whether the initial sound exposure induced SNHL, (2) where within the cortical frequency map the cell is located, (3) when relative to exposure the measurement is made, and (4) the proximity of the stimulus test frequency to the cochlear lesion.

Cortical hyperresponsivity and increased gain mirror behavioral hypersensitivity to spared mid-frequency inputs

In the Go/NoGo tone detection behavior, mice exhibited steepened 8 kHz detection functions after trauma (Figure 2B). Excess gain in A1 L2/3 PyrNs mirrored this result in that steepened neural growth functions were observed for 8 kHz tones over the same timescale. To more directly relate ACtx hyperactivity to perceptual hypersensitivity, we used a decoder to categorize the presence or absence of sound based on single-trial responses from hundreds of simultaneously recorded neurons located either in the intact or deafferented zone of the A1 map. This was accomplished by training an SVM classifier on PCA-decomposed population activity during short periods of tone presentation or silence (Figure 5D). Classification of single trial A1 ensemble responses supported the hypothesis that cortical discrimination of sound versus silence would be enhanced for low- and mid-intensity 8 kHz tones but reduced for 32 kHz tones after trauma (Figure 5E). Enhanced neural detection of 8 kHz tones was largely driven by PyrNs in deafferented map regions, whereas the loss of cortical sensitivity to high-frequency tones was observed in both topographic zones (Figure 5F).
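The decoding pipeline (PCA decomposition of single-trial population activity, then SVM classification of tone versus silence) can be sketched on simulated data; all variable names and parameter choices here, such as the number of components, are our assumptions rather than the authors' settings:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_cells = 60, 200

# Simulated single-trial population responses: tone trials carry a small
# additive signal on a random subset of cells; silence trials are noise only.
signal = np.zeros(n_cells)
signal[rng.choice(n_cells, 40, replace=False)] = 1.0
tone = rng.normal(0, 1, (n_trials, n_cells)) + signal
silence = rng.normal(0, 1, (n_trials, n_cells))
X = np.vstack([tone, silence])
y = np.array([1] * n_trials + [0] * n_trials)

# Decompose trials into principal components, then classify with a linear SVM.
pcs = PCA(n_components=10).fit_transform(X)
accuracy = cross_val_score(SVC(kernel='linear'), pcs, y, cv=5).mean()
print(f"decoding accuracy: {accuracy:.2f}")
```

In the study, this classification was repeated at each sound intensity, so the decoder's accuracy-versus-intensity function plays the role of a neural "detection curve" that can be compared with the behavioral one.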

Tracking changes in activity and local synchrony in individual PyrNs over several weeks

A principal advantage of chronic two-photon calcium imaging lies in the ability to perform longitudinal assessments of activity changes in individual cells over relatively long time periods, essentially enabling individual cells to serve as their own control when evaluating changes after noise exposure. To track individual PyrNs over a several week period, imaging fields were first registered to a pre-exposure imaging session, and cell tracking was performed using probabilistic modeling (Sheintuch et al., 2017; Figure 6A, B, see Materials and methods). Some PyrNs could be tracked throughout all 15 imaging sessions and others just for a single session, where the overall number of neurons depended on how liberal or conservative the threshold was set for chronic tracking confidence (Figure 6—figure supplement 1B). We set the criterion for identifying chronically tracked cells by creating a control scenario in which single imaging fields from eight different mice were concatenated, allowing us to quantify the occurrence of falsely identified tracked cells across sessions. We observed a clear separation in the occurrence of veridically and falsely identified cells across sessions, where falsely tracked PyrNs over eight or more sessions with a confidence threshold of 0.8 were never observed, prompting us to use this criterion for the remainder of our analyses.
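The inclusion criterion itself reduces to a joint threshold on registration confidence and session coverage. A hypothetical sketch (function name, inputs, and example values are ours; the actual registration is done by CellReg):

```python
import numpy as np

def chronically_tracked(cell_scores, session_counts,
                        score_thresh=0.8, min_sessions=8):
    """Flag cells whose tracking confidence and session coverage both meet
    the criteria under which falsely tracked cells were never observed."""
    scores = np.asarray(cell_scores)
    counts = np.asarray(session_counts)
    return (scores >= score_thresh) & (counts >= min_sessions)

# Hypothetical example: four cells with (confidence, number of sessions tracked)
scores = [0.95, 0.85, 0.60, 0.90]
sessions = [15, 12, 14, 5]
print(chronically_tracked(scores, sessions).tolist())  # [True, True, False, False]
```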

Figure 6 with 1 supplement see all
Tracking single neuron response gain dynamics over a several week period before and after noise exposure.

(A) Example fields-of-view from a single mouse showing the same imaging field over a several week period. Insets show data acquired at ×4 digital zoom. Scale bar is 200 μm, inset scale bar is 20 μm. (B) Single L2/3 PyrN ROI masks. Green masks indicate cells found on the current day and all previous days using a cell score threshold of 0.8 (see Figure 6—figure supplement 1). (C) Normalized 8 kHz intensity-response functions for the four PyrNs highlighted in B. Neurons in the intact region show temporary increases in their responses while neurons in the deafferented region show permanent hyperresponsiveness. (D) Mean fold change in response to 8 kHz tones of varying intensities for individual neurons relative to their own response function measured prior to noise exposure (n = 303/552, trauma/sham). Gain is strongly elevated in both regions hours after trauma. Sustained gain increases are observed in the deafferented zone for at least 1 week following trauma but not in the intact zone. Four-way analysis of variance (ANOVA) with Group and Region as factors, and Time and Intensity as repeated measures (main effects, respectively: F = 6.87, p = 0.01; F = 2.9, p = 0.09; F = 8.69, p = 4 × 10–5; F = 116.61, p = 8 × 10–13; Group × Region × Time × Intensity interaction term: F = 6.65; p = 0.0004).

Interestingly, we noted that new active PyrNs appeared immediately following noise exposure, while PyrNs tracked throughout the baseline imaging sessions disappeared (Figure 6—figure supplement 1C). Because the degree of turnover was far less after sham exposure, because the appearance or disappearance of PyrNs after acoustic trauma was concentrated near the deafferentation boundary (Figure 6—figure supplement 1D), and because approximately 75% of PyrN disappearance occurred within the 48 hr after acoustic trauma (Figure 6—figure supplement 1E), we concluded that increased PyrN turnover in the imaging field must be related to cortical changes arising indirectly from the cochlear SNHL. Possibilities include large and heterogeneous state shifts in activity (from virtually quiescent to active or vice versa) as well as changes in the physical topology of the cortex due to degradation of extracellular matrix proteins and increased neurite motility after sudden hearing loss (Nguyen et al., 2017; Tschida and Mooney, 2012).

Chronic cell tracking allowed us to revisit analyses of central gain at the level of individual PyrNs, rather than across PyrN populations in intact or deafferented map regions. At this higher level of spatial specificity, we noted either no change or a temporary increase in 8 kHz responsiveness among cells in the intact zone, and a permanent increase in responsiveness among cells in the deafferented zone (Figure 6C). More specifically, we noted transient hyper-responsiveness from 20 to 80 dB SPL for all neurons across A1 (Figure 6D) followed by a second stage of 8 kHz hyper-responsivity restricted to intensities above 50 dB SPL in the deafferented zone (Figure 6D). These observations can account for performance changes in the population-level decoder (Figure 5E) and more generally confirm that the temporal and spatial patterns noted across populations of PyrNs are supported by commensurate plasticity of single PyrN responses.

Prior studies have noted increased spontaneous activity in acute single unit or multiunit recordings in the days following acoustic trauma, though it has not been clear whether that is driven by the unmasking of hitherto silent neurons (Figure 6—figure supplement 1C) or increased activity rates of individual neurons over time (Kotak et al., 2005; Noreña et al., 2010; Noreña et al., 2003; Seki and Eggermont, 2003). Normalizing by the pre-exposure period, we noted an approximate 20% increase in PyrN spontaneous activity across A1 after acoustic trauma, but relatively modest effects following sham exposure (Figure 7A, B). Unlike the increase in edge frequency tone-responsiveness, increased spontaneous activity after acoustic trauma was not topographically restricted and continued to increase steadily with time after acoustic trauma (Figure 7B, C).

Topographic regulation of neural hyperexcitability and hyper-synchrony after acoustic trauma.

(A) Spontaneous activity traces in four example neurons from a trauma (left) and sham (right) mouse. (B) In chronically tracked PyrNs, spontaneous activity changes are expressed as fold change relative to that cell’s pre-exposure baseline. Increased spontaneous activity after trauma (right) or the lack thereof after sham exposure (left) are plotted over topographic distance and over post-exposure time. (C) Spontaneous activity changes across the cortical map are significantly greater after trauma than sham exposure and increase over post-exposure time (n = 915/1125 tracked cells, for trauma/sham; mixed model analysis of variance (ANOVA) with Group as a factor and Time as a repeated measure, main effect for Group [F = 12.81, p = 0.0004], main effect for Time [F = 65.03, p = 3 × 10–40], Group × Time interaction term [F = 17.66, p = 3 × 10–11]). (D) Synchrony in the spontaneous activity of PyrN pairs is measured as the area under the shuffle-corrected cross-correlogram peak (shaded red region, see Materials and methods). Example data are plotted for the same four PyrNs with topographic positions indicated in the left panel. (E) Looking across all significantly correlated PyrN pairs recorded in a given imaging session (n = 3,301,363 pairs, 1,624,195/1,677,168 for trauma/sham), neural synchrony is reduced as the physical separation between somatic ROIs increases. Synchrony is increased after trauma, though remains elevated only among nearby PyrNs (three-way ANOVA with Group, Day, and Distance as factors: main effects for Group [F = 556.94, p = 4 × 10–123], Day [F = 82.6, p = 2 × 10–53], and Distance [F = 8527.73, p = 0], Group × Day × Distance interaction term [F = 7.94, p = 3 × 10–5]). (F) For each chronically tracked neuron (same sample as C), we calculate its average neural synchrony with all other cells (only taking significant pairs). 
Given the location of these tracked cells, we can examine the fold change in neural synchrony relative to pre-exposure baseline across the topographic map. Neural synchrony is significantly and stably increased after trauma, particularly for PyrNs located near the deafferentation boundary (mixed model ANOVA with Group and Distance as factors and Day as a repeated measure: main effects for Group [F = 26.62, p = 3 × 10–7], Day [F = 1.68, p = 0.19], and Distance [F = 0.53, p = 0.47], Group × Distance interaction term [F = 5.53, p = 0.02]).

Tinnitus and auditory hypersensitivity are thought to arise from a combination of spontaneous hyperactivity as well as increased synchrony between cells (Auerbach et al., 2019, Auerbach et al., 2014; Resnik and Polley, 2021; Seki and Eggermont, 2003; Shore and Wu, 2019). To determine how increased synchrony developed over topographic space and post-trauma time, we cross-correlated periods of spontaneous activity between pairs of A1 PyrNs and removed the influence of gross changes in activity rates through shuffle correction (Figure 7D, see Materials and methods). Using the area under the shuffle-corrected cross-correlogram as our index of pairwise synchrony (Figure 7D), we noted that synchronized activity normally decayed to asymptote once PyrNs were separated by more than 0.25 mm (Figure 7E, top). In the first 72 hr following acoustic trauma, pairwise synchrony was significantly enhanced for PyrNs separated by as much as 0.7 mm (Figure 7E, bottom). Three days after trauma and beyond, local synchrony remained strongly elevated for PyrNs relative both to baseline and to sham-exposed controls. Increased synchrony following trauma was primarily observed in PyrNs located close to – or straddling – the deafferentation boundary, where the peak of elevated synchrony was positioned within a tonotopic region corresponding to lesion edge sound frequencies (Figure 7F).
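The shuffle-correction logic can be illustrated in a few lines. This is a simplified reconstruction on simulated activity traces (names, the circular-shift shuffle, and the lag window are our assumptions, not the authors' exact implementation): the raw cross-correlogram is computed across lags, a rate-matched baseline is estimated by repeatedly shifting one trace, and the synchrony index is the area under the corrected correlogram.

```python
import numpy as np

def shuffle_corrected_synchrony(a, b, max_lag=10, n_shuffles=200, seed=0):
    """Pairwise synchrony as the area under the shuffle-corrected
    cross-correlogram: raw cross-correlation minus the mean correlation
    after circularly shifting one trace (removes gross rate effects)."""
    rng = np.random.default_rng(seed)
    lags = np.arange(-max_lag, max_lag + 1)
    raw = np.array([np.dot(a, np.roll(b, k)) for k in lags]) / len(a)
    shuf = np.zeros_like(raw)
    for _ in range(n_shuffles):
        shifted = np.roll(b, rng.integers(len(b)))
        shuf += np.array([np.dot(a, np.roll(shifted, k)) for k in lags]) / len(a)
    corrected = raw - shuf / n_shuffles
    return corrected.sum()  # area under the corrected correlogram

rng = np.random.default_rng(1)
shared = rng.poisson(0.2, 5000)           # common input drives both cells
cell1 = shared + rng.poisson(0.05, 5000)
cell2 = shared + rng.poisson(0.05, 5000)
independent = rng.poisson(0.25, 5000)     # matched firing rate, no shared input
print(shuffle_corrected_synchrony(cell1, cell2) >
      shuffle_corrected_synchrony(cell1, independent))
```

Because the shuffle term subtracts the correlation expected from firing rates alone, a cell pair with elevated but independent activity scores near zero, while genuinely coordinated pairs retain a positive area.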

Predicting the degree of excess central gain after trauma in individual PyrNs based on their pre-exposure response features

Taken together, these observations also suggest that plasticity following acoustic trauma is organized into two phases: a dynamic phase during the first 48–72 hr after noise exposure, involving topographically widespread hyper-correlation, hyper-responsivity to mid-frequency sounds in intact map regions, and large-scale turnover in PyrN stability around the deafferentation map boundary; followed by a stable phase beginning 3 days after exposure, in which gain is increased for spared tone frequencies in deafferented map regions. However, even when tracking the same neuron over time during the stable phase of reorganization, we still noted considerable heterogeneity in the expression of central gain changes between individual PyrNs. For example, of two PyrNs located within the deafferented map region, one could show stable 8 kHz growth functions after trauma (Figure 8A) and the other excess gain (Figure 8B).

Figure 8 with 1 supplement see all
Identifying baseline features in single PyrNs that predict stable versus excess gain changes after acoustic trauma.

(A) Two exemplar tracked neurons illustrating stable (left) and excess (right) response growth to an 8 kHz tone following acoustic trauma. (B) Both neurons are located in the deafferented map region but had different best frequencies (BFs) and frequency tuning properties measured during the baseline imaging session. (C) Spontaneous activity for the same two PyrNs also differed at baseline. (D) For tracked neurons, gain is measured as the fold change in the area under the intensity-response growth function relative to the pre-exposure baseline (see Figure 8—figure supplement 1A). In eight representative neurons, a higher spontaneous activity rate at baseline was associated with excess central gain after trauma. Arrows denote PyrNs 1 and 2 shown in A–C. (E) A linear model used various pre-exposure properties of chronically tracked neurons to predict their change in gain (see Materials and methods). Models were fit separately for PyrNs recorded from trauma (n = 510 cells) and sham (n = 749 cells) mice. The response variable is defined as the area under the 8 kHz response curve after noise/sham exposure relative to this same area measurement in baseline. (F) For each model, individual predictor variables were shuffled and the models were refit. The resulting decrease in the adjusted R-squared is shown for variables in both models, and bars are color coded by the sign of the relationship of each predictor variable with the response variable. Errors are bootstrapped. For the full model see Figure 8—figure supplement 1B. Predictor variables in order: area under the baseline 8 kHz intensity-response growth function, monotonicity index for the 8 kHz intensity-response function defined as the response at the maximum intensity divided by the response at the best intensity, mean spontaneous activity, BF, an indicator variable for whether the cell is in the deafferented or intact region, and the strength of correlated activity between the PyrN and its neighbors. 
(G) A graphical summary of the linear model results schematize the baseline factors most strongly associated with stable (left) or excess (right) gain after trauma.

To account for unexplained variability in central gain changes, we asked whether response features measured in the pre-exposure baseline period could predict whether neurons would express homeostatic or non-homeostatic regulation of sound-evoked activity. Returning to the same two example neurons, we noted that the PyrN that maintained stable gain had a relatively low spontaneous activity rate and sharp frequency tuning prior to acoustic trauma, whereas the PyrN that would express excess gain had higher spontaneous activity and a relatively broad pure tone receptive field (Figure 8B, C). Expanding this analysis to additional example neurons over post-exposure time further suggests that PyrNs that would go on to express non-homeostatic auditory growth functions tended to feature higher spontaneous activity levels at baseline (Figure 8D).

We used a multiple linear regression model to better capture how baseline properties can explain heterogeneous changes in neural response gain. Central gain was operationally defined as the change in the area under the 8 kHz growth function relative to the PyrN’s baseline growth function area (as per Figure 8A, B). Quantifying neural gain as fold change in the normalized 8 kHz growth function area recapitulated the same topographic and temporal dependence described above with intensity growth slope (Figure 8—figure supplement 1A). To avoid overfitting the regression model, we selected a single time point – 3–5 days after noise exposure – that corresponded to the stable phase of reorganization after acoustic trauma, while maximizing our sample size of chronically tracked PyrNs. As predictor variables, we selected various spontaneous and sound-evoked response baseline features along with features related to the physical position of the PyrN within the A1 tonotopic map (see Figure 8 legend).
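The response variable and one key predictor can be sketched as follows; this is an illustrative reconstruction with our own names and toy values, pairing the area-based gain definition with the monotonicity index defined in the Figure 8 legend (response at maximum intensity divided by response at best intensity):

```python
import numpy as np

def growth_area(levels_db, responses):
    """Area under the intensity-response growth function (trapezoidal rule)."""
    r = np.asarray(responses, float)
    x = np.asarray(levels_db, float)
    return float((0.5 * (r[1:] + r[:-1]) * np.diff(x)).sum())

def area_fold_change(levels_db, post, baseline):
    """Central gain: post-exposure growth-function area relative to the
    same cell's pre-exposure baseline area."""
    return growth_area(levels_db, post) / growth_area(levels_db, baseline)

def monotonicity_index(responses):
    """Response at the maximum intensity divided by the response at the
    best intensity (1 = monotonic growth; <1 = non-monotonic tuning)."""
    r = np.asarray(responses, float)
    return float(r[-1] / r.max())

levels = np.arange(0, 90, 10)
baseline = np.array([0.0, 0.1, 0.2, 0.4, 0.6, 0.8, 1.0, 1.2, 1.4])
post = 2 * baseline  # hyperresponsive after trauma
print(round(area_fold_change(levels, post, baseline), 1))  # 2.0
print(round(monotonicity_index(baseline), 1))              # 1.0 (monotonic)
```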

We found that regressing the post-exposure change in neural gain on baseline PyrN response features could account for approximately 40% of the variance in central gain changes after acoustic trauma but just 13% in sham-exposed controls, where neural gain changes were small overall and far less systematic (Figure 8E). This is noteworthy because the model excluded features related to the degree of cochlear damage or reduced bottom-up sensory afferent drive, which are traditionally interpreted as the primary determinants of cortical central gain changes. To determine the weighting of each predictor variable to the overall model fit, we randomly shuffled each variable and refit the model to calculate the decrement in the adjusted R-squared. The results are provided for all univariate predictors (Figure 8F) and all predictors including interaction terms (Figure 8—figure supplement 1B). We observed that excess non-homeostatic gain regulation following acoustic trauma occurred with the highest probability in PyrNs exhibiting weak, monotonically increasing 8 kHz growth functions and higher spontaneous activity levels at baseline (Figure 8G). Further, PyrNs located in the high-frequency (deafferented) region but with a lower frequency BF further increased the likelihood of expressing excess neural gain after trauma (Figure 8G). Taken together, these findings show that certain idiosyncratic response features measured just prior to acoustic trauma can predict whether A1 L2/3 PyrNs will undergo stable, homeostatic regulation or excess, non-homeostatic changes to a stimulus positioned near the edge of the cochlear lesion.
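The shuffle-and-refit procedure is a variant of permutation importance. A minimal sketch on simulated data (predictor names, effect sizes, and the number of shuffles are our assumptions): each predictor column is permuted, the model is refit, and the importance of that predictor is the resulting drop in adjusted R-squared.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

def adjusted_r2(model, X, y):
    """Adjusted R-squared for a fitted linear model."""
    n, p = X.shape
    return 1 - (1 - model.score(X, y)) * (n - 1) / (n - p - 1)

def shuffle_importance(X, y, names, n_shuffles=50, seed=0):
    """Importance of each predictor = mean drop in adjusted R-squared when
    that column is shuffled and the model is refit."""
    rng = np.random.default_rng(seed)
    full = adjusted_r2(LinearRegression().fit(X, y), X, y)
    drops = {}
    for j, name in enumerate(names):
        deltas = []
        for _ in range(n_shuffles):
            Xs = X.copy()
            Xs[:, j] = rng.permutation(Xs[:, j])
            deltas.append(full - adjusted_r2(LinearRegression().fit(Xs, y), Xs, y))
        drops[name] = float(np.mean(deltas))
    return drops

rng = np.random.default_rng(1)
n = 500
spont = rng.normal(size=n)       # baseline spontaneous activity (strong predictor)
mono = rng.normal(size=n)        # monotonicity index (weak predictor)
irrelevant = rng.normal(size=n)  # unrelated variable
gain = 0.8 * spont + 0.2 * mono + rng.normal(scale=0.5, size=n)
X = np.column_stack([spont, mono, irrelevant])
drops = shuffle_importance(X, gain, ['spont', 'mono', 'irrelevant'])
print(max(drops, key=drops.get))  # 'spont'
```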

Discussion

Here, we introduced a noise exposure protocol that damages OHCs and neural afferents in the high-frequency base of the cochlea, providing a mouse model for the common steeply sloping high-frequency hearing loss profile in humans that is often associated with tinnitus, loudness hypersensitivity, and poor multitalker speech intelligibility (Figure 1). We demonstrated that behavioral hypersensitivity to spared, mid-frequency tones was also observed for direct stimulation of thalamocortical projection neurons, identifying the ACtx as a potential locus of plasticity underlying auditory hypersensitivity (Figure 2). We tracked ensembles of excitatory PyrNs over several weeks and confirmed large-scale reorganization of ACtx tonotopic maps (Figure 3) and sound intensity coding (Figure 4) that recapitulated the auditory hypersensitivity documented behaviorally (Figure 5). Neural hyperresponsivity to spared mid-frequency sounds was accompanied by hyperactive and hyper-correlated spontaneous activity (Figure 7), where the degree of excess neural gain following acoustic trauma in individual neurons could be predicted from many of these response features measured during the pre-exposure baseline period (Figure 8). Collectively, these findings underscore the close association between excess cortical gain and disordered sound perception after cochlear sensorineural damage and identify activity signatures that predispose neurons to non-homeostatic hyperactivity following noise-induced hearing loss.

Underlying mechanisms

Auditory hypersensitivity is a common perceptual complaint associated with SNHL. Excess central gain – an abnormally steep growth of neural response with sound intensity – is the hypothesized neural substrate of auditory hypersensitivity (Auerbach et al., 2019, Auerbach et al., 2014; Zeng, 2020; Zeng, 2013). Excess central gain is prominently expressed in the ACtx of animals with sensorineural cochlear damage (Asokan et al., 2018; Chambers et al., 2016a; Noreña et al., 2003; Parameshwarappa et al., 2022; Popelár et al., 1987; Qiu et al., 2000; Resnik and Polley, 2017, Resnik and Polley, 2021; Seki and Eggermont, 2003; Syka et al., 1994) by contrast to the auditory nerve, where sound-evoked neural responses are strongly reduced (Heinz et al., 2005; Heinz and Young, 2004; Wake et al., 1993). Reorganization is also observed in subcortical stations of sound processing in animals with sensorineural cochlear damage (Chambers et al., 2016b; Kamke et al., 2003; Niu et al., 2013; Schrode et al., 2018; Shaheen and Liberman, 2018), but only in particular cell types (Cai et al., 2009) and – in studies that have made direct inter-regional comparisons – is less robust overall than neural gain changes at the level of ACtx (Chambers et al., 2016a; Qiu et al., 2000).

Our findings build upon this literature by demonstrating excess central gain in L2/3 excitatory PyrNs that resembled behavioral hypersensitivity to spared mid-frequency tones. Seminal work demonstrated that animals became behaviorally hypersensitive to electrical microstimulation of central auditory neurons following cochlear damage (Gerken, 1979). Here, we expanded on this observation by demonstrating that noise-exposed mice are behaviorally hypersensitive to direct activation of thalamocortical projection neurons, which further underscores the association between excess cortical gain and auditory hypersensitivity. Hypersensitivity to direct stimulation of the thalamocortical pathway may be underpinned by changes in ACtx gene expression that result in elevated mRNA levels for glutamate receptor subunits and reduced mRNA transcripts and membrane-bound protein expression for GABAA receptor subunits (Balaram et al., 2019; Sarro et al., 2008). These adjustments in ligand-gated receptors have been associated with commensurate elevations in spontaneous excitatory postsynaptic currents and reduced inhibitory postsynaptic currents in PyrNs (Kotak et al., 2005; Yang et al., 2011). Thus, disinhibition and hypersensitization are conceptualized as synergistic compensatory responses that are triggered by a sudden decline in peripheral neural input, rendering cortical PyrNs less sensitive to feedforward inhibition from local inhibitory neurons – from parvalbumin-expressing (PV) fast-spiking interneurons in particular – and hyperresponsive to sound (Masri et al., 2021; Resnik and Polley, 2021; Resnik and Polley, 2017).

One challenge to this purely bottom-up model for compensatory plasticity underlying excess central gain is that it cannot readily account for why neighboring neurons exhibit heterogeneous changes in sound-evoked hyperresponsivity after peripheral damage. Here, we found that approximately 40% of the variability in excess central gain among individual PyrNs could be accounted for by their response properties measured in the days just prior to acoustic trauma. In particular, PyrNs with low spontaneous activity rates and non-monotonic encoding of sound intensity – features associated with stronger intracortical inhibition (Tan et al., 2007; Wu et al., 2006) – showed more stable gain control after trauma. Conversely, initially high spontaneous activity, hyper-correlation, and gradual monotonic intensity growth functions could be reflective of weaker intracortical inhibitory tone and were features that predicted non-homeostatic excess gain after acoustic trauma.

Several recent findings support the idea that variations in the strength of intracortical inhibition can function as a watershed for subsequent functional outcomes. In an earlier study, we applied ouabain to the cochlear round window to produce bilateral lesions of approximately 95% of cochlear afferent neurons. Near-complete elimination of cochlear afferent input was associated with functional deafness at the level of the ACtx in approximately half the animals but a remarkable recovery of sound responsiveness in the other half. Single unit recordings from the cohort of mice that recovered sound processing weeks after cochlear neural loss all featured a rapid decline in PV-mediated feedforward inhibition onto PyrNs during the first hours and days following the peripheral injury, while the mice that showed no functional recovery expressed a far slower decay in PV-mediated inhibition (Resnik and Polley, 2017). Another piece of evidence comes from ACtx single unit recordings in marmosets outfitted with unilateral cochlear implants. Single units with narrowly tuned non-monotonic acoustic frequency-response areas – presumably reflecting stronger local inhibition – were suppressed by spatially diffuse electrical stimulation of the auditory nerve. Conversely, units with broad, V-shaped acoustic selectivity encoded cochlear implant stimulation with high fidelity (Johnson et al., 2016). Collectively, these findings underscore that the effects of peripheral afferent perturbations on central circuits are not exclusively determined by bottom-up drive but instead are also brokered by variations in the balance of excitation and inhibition within the local circuit or by gene expression differences between individual cells. 
At a global level, these mitigating influences are shaped by developmental stage (Dorrn et al., 2010; Harris et al., 2005; Sun et al., 2010) and circadian programs (Basinou et al., 2017), but – at the level of individual neurons – may reflect latent differences in genetic subtypes of L2/3 PyrNs or purely stochastic variations in inhibitory tone between different microcircuit milieus.

Behavioral hypersensitivity: interpretations and technical limitations

We found evidence of behavioral hypersensitivity to sound following acoustic trauma as measured by a steeper relationship between increasing tone intensity and detection ability in a Go/NoGo task (Figure 2). Although previous indicators of behavioral hypersensitivity in animals have relied on more reflexive measures, such as the acoustic startle response (Hickox and Liberman, 2014; Rybalko et al., 2015; Rybalko et al., 2011; Sun et al., 2012), more recent work has utilized operant detection tasks including reaction times as an approximation of sound hypersensitivity (Auerbach et al., 2019). Here, we used an adaptive tracking approach to characterize the perceptual salience of liminal sound intensities spanning an approximate 30 dB range around threshold. To control for changes in global behavioral state due to acoustic trauma, our slope measurements are taken from functions using a sensitivity index (d′), which normalizes lick probability according to the false alarm rate determined from the delivery of catch (silent) trials. Thus, overall changes in stress, arousal, or other global behavioral states following acoustic trauma that impacted overall responsivity in catch and stimulus trials would be controlled for by the d′ measurement. However, as with reaction time, the d′ growth function is not a direct measure of loudness, per se, but instead is probably best likened to the change in stimulus salience for tones of varying intensity. Further, while the perceptual salience (Figure 2) and neural decoding of spared, 8 kHz tones (Figure 5) were both enhanced after high-frequency SNHL, these measurements were not performed in the same animals (and therefore not at the same time). Definitive proof that increased cortical gain is the neural substrate for auditory hypersensitivity after hearing loss would require concurrent monitoring and manipulations of cortical activity, which would be an important goal for future experiments.
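The sensitivity index described above follows the standard signal detection definition, d′ = z(hit rate) − z(false alarm rate). A minimal sketch (the clipping bounds are our assumption, introduced only to keep the z-transform finite at rates of 0 or 1):

```python
from statistics import NormalDist

def d_prime(hit_rate, fa_rate, floor=0.01, ceil=0.99):
    """Sensitivity index d' = z(hit rate) - z(false alarm rate); rates are
    clipped away from 0 and 1 so the inverse-normal transform stays finite."""
    z = NormalDist().inv_cdf
    hit = min(max(hit_rate, floor), ceil)
    fa = min(max(fa_rate, floor), ceil)
    return z(hit) - z(fa)

# With the false alarm rate fixed by catch (silent) trials, d' grows with
# hit rate; the slope of this growth across tone intensity is the
# behavioral hypersensitivity measure.
fa = 0.10
for hit in (0.2, 0.5, 0.9):
    print(f"hit={hit:.1f}  d'={d_prime(hit, fa):.2f}")
```

Because the false alarm term enters every d′ value, a global shift in lick probability (e.g., from stress or arousal) moves hits and false alarms together and largely cancels, which is the rationale given in the text.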

While the findings presented here support an association between sensorineural peripheral injury, excess cortical gain, and behavioral hypersensitivity, they should not be interpreted as providing strong evidence for these factors in clinical conditions such as tinnitus or hyperacusis. Our data have nothing to say about tinnitus one way or the other, simply because we never studied a behavior that would indicate phantom sound perception. If anything, one might expect that mice experiencing a chronic phantom sound corresponding in frequency to the region of steeply sloping hearing loss would instead exhibit an increase in false alarms on high-frequency detection blocks after acoustic trauma, but this was not something we observed. Hyperacusis describes a spectrum of aversive auditory qualities including increased perceived loudness of moderate intensity sounds, a decrease in loudness tolerance, discomfort, pain, and even fear of sounds (Pienkowski et al., 2014a). The affective components of hyperacusis are more challenging to index in animals, particularly using head-fixed behaviors, though progress is being made with active avoidance paradigms in freely moving animals (Manohar et al., 2017). Our noise-induced high-frequency SNHL and Go/NoGo operant detection behavior were not designed to model hyperacusis. Hyperacusis is not strongly associated with hearing loss: many individuals with hyperacusis have normal hearing or a pattern of mild hearing loss that does not correspond to the frequency dependence of their auditory sensitivity (Sheldrake et al., 2015). While the excess central gain and behavioral hypersensitivity we describe here may be related to the sensory component of hyperacusis, this connection is tentative because it was elicited by acoustic trauma and because the detection behavior provides a measure of stimulus salience, but not the perceptual quality of loudness, per se.

Cortical hyperreactivity: interpretations and technical limitations

Two-photon calcium imaging offers several key advantages for cortical plasticity studies including the ability to track single neurons over weeks (Figure 6) and genetic access to multiple cell types (Resnik and Polley, 2021). On the other hand, it can provide less insight into the mechanisms underlying destabilized excitatory-inhibitory balance than electrophysiological approaches (Resnik and Polley, 2017; Yang et al., 2011; Yang et al., 2012). Further, calcium indicators provide only an approximation of neural activity and can be limited in their kinetics and reliability to report precise cellular events (Grienberger and Konnerth, 2012), although correct deconvolution and post hoc analysis techniques can help to minimize issues introduced from calcium imaging (Sabatini, 2019).

Homeostatic plasticity describes a negative feedback process that stabilizes neural activity levels following input perturbations. Homeostatic plasticity mechanisms modify excitatory and inhibitory synapses over a period of hours or days to offset input perturbations and gradually restore spiking activity back to baseline levels (Turrigiano, 2012; Turrigiano, 2008). Importantly, cytosolic calcium is itself an upstream barometer of activity that regulates the molecular signaling cascades underlying AMPA receptor scaling or GABA receptor removal (Harris and Rubel, 2006; Turrigiano, 2012). This underscores a key advantage to using calcium imaging in experiments that monitor network activity following perturbations of peripheral input levels: that although GCaMP is a closely related but indirect measure of spiking, it arguably provides more direct insight into a key upstream signal driving homeostatic plasticity signaling cascades.

Although we did not measure homeostatic plasticity per se, via direct demonstrations of intrinsic or heterosynaptic electrophysiological changes, our measurements of spontaneous and sound-evoked calcium transients clearly demonstrate a failure of homeostatic regulation following acoustic trauma. Spontaneous and sound-evoked calcium levels both remained elevated above pre-exposure baseline levels or levels observed in sham-exposed control mice. Hyperactive and hyper-correlated activity in regions of the cortical topographic map corresponding to peripheral sensorineural damage is widely understood to be the underlying neural basis of phantom sound perception in tinnitus (Auerbach et al., 2019; Herrmann and Butler, 2021b; Noreña, 2011). Here, we show that these changes are also a likely underlying neural substrate for auditory hypersensitivity, a core component of hyperacusis that often accompanies tinnitus (Cederroth et al., 2020; Schecklmann et al., 2014). These findings identify a neurophysiological target for testing therapeutic strategies in animals and inform the selection of non-invasive biomarkers for the development of improved diagnostics and therapies in human populations (Polley and Schiller, 2022).

Materials and methods

Key resources table
Reagent type (species) or resource | Designation | Source or reference | Identifiers | Additional information
Genetic reagent (Mus musculus) | C57BL/6J-Tg(Thy1-GCaMP6s)GP4.12Dkim/J | Jackson Laboratory | JAX #025776 | Male
Genetic reagent (Mus musculus) | CBA/CaJ | Jackson Laboratory | JAX #000654 | Female
Genetic reagent (Mus musculus) | C57BL/6J | Jackson Laboratory | JAX #000664 | Male/female
Recombinant DNA reagent | AAVrg-pgk-Cre | Addgene | RRID:Addgene_24593 |
Recombinant DNA reagent | AAV5-Ef1a-DIO hChR2(E123T/T159C)-EYFP | Addgene | RRID:Addgene_35509 |
Antibody | ms(IgG1) α CtBP2 (mouse monoclonal) | BD Transduction Labs | BDB612044 | 1:200
Antibody | rb α MyosinVIIa (rabbit polyclonal) | Proteus Biosciences | 25–6790 | 1:200
Antibody | ms(IgG2a) α GluA2 (mouse monoclonal) | Millipore | MAB397 | 1:2000
Antibody | rb α Espin (rabbit polyclonal) | Sigma | HPA028674 | 1:100
Antibody | gt α ms (IgG2a) AF 488 (goat polyclonal) | Thermo Fisher | A-21131 | 1:1000
Antibody | gt α ms (IgG1) AF 568 (goat polyclonal) | Thermo Fisher | A-21124 | 1:1000
Antibody | dk α rb AF 647 (donkey polyclonal) | Thermo Fisher | A-31573 | 1:200
Antibody | gt α rb PacBlue (goat polyclonal) | Thermo Fisher | P10994 | 1:200
Chemical compound, drug | Lidocaine hydrochloride | Hospira Inc | Cat #71-157-DK |
Chemical compound, drug | Buprenorphine hydrochloride | Buprenex | Cat #NDC 12496-0757-5 |
Chemical compound, drug | Isoflurane | Piramal | Cat #NDC 66794-013-10 |
Chemical compound, drug | Silicon adhesive | WPI | Cat #KWIK-SIL |
Chemical compound, drug | C&B Metabond Quick Adhesive Cement System | Parkwell | Cat #S380 |
Software, algorithm | LabVIEW | National Instruments | https://www.ni.com/en-us/shop/labview.html | Version 2015
Software, algorithm | ThorImage 3.0 | Thorlabs | https://www.thorlabs.com/newgrouppage9.cfm?objectgroup |
Software, algorithm | Suite2P | Github | https://github.com/cortex-lab/Suite2P; The Cortical Processing Laboratory at UCL, 2019 | Pachitariu et al., 2016
Software, algorithm | CellReg | Github | https://github.com/zivlab/CellReg; zivlab, 2022 | Sheintuch et al., 2017
Software, algorithm | MATLAB | Mathworks | https://www.mathworks.com/products/matlab.html | Version 2017b
Other | Solenoid driver | Eaton-Peabody Labs | https://github.com/EPL-Engineering/epl_valve; EPL-Engineering, 2019b | See Methods, ‘Go/NoGo tone detection task’
Other | Lickometer | Eaton-Peabody Labs | https://github.com/EPL-Engineering/epl_lickometer; EPL-Engineering, 2019a | See Methods, ‘Go/NoGo tone detection task’
Other | PXI Controller | National Instruments | PXIe-8840 | See Methods, ‘Go/NoGo tone detection task’
Other | Free-field speaker | Parts Express | 275-010 | See Methods, ‘Go/NoGo tone detection task’
Other | Ti-Sapphire Laser | Spectra Physics | Mai Tai HP DeepSee | See Methods, ‘Widefield and two-photon calcium imaging’
Other | ×16/0.8 NA Objective | Nikon | CFI75 LWD 16X W | See Methods, ‘Widefield and two-photon calcium imaging’
Other | Two-photon microscope | Thorlabs | Bergamo II | See Methods, ‘Widefield and two-photon calcium imaging’
Other | Titanium headplate | iMaterialise | Custom | See Methods, ‘Survival surgeries for awake, head-fixed experiments’
Other | Diode laser (488 nm) | Omicron | LuxX 488-100 | See Methods, ‘Go/NoGo optogenetic detection task’

Experimental model and subject details

All procedures were approved by the Massachusetts Eye and Ear Animal Care and Use Committee and followed the guidelines established by the National Institutes of Health for the care and use of laboratory animals. Imaging and tone Go/NoGo behavior were performed on Thy1-GCaMP6s × CBA mice. Combined acoustic and optogenetic Go/NoGo behavioral studies were performed in C57BL/6J mice. Mice of both sexes were used for this study. Noise exposure occurred at 9 weeks postnatal and was timed to occur in the morning, when the temporary component of the threshold shift is less extreme and variable (Meltser et al., 2014). Mice were maintained on a 12 hr light/12 hr dark cycle. Mice were provided with ad libitum access to food and water unless they were on-study for behavioral testing, in which case they had restricted access to water in the home cage.

Data were collected from 44 mice. A total of 22 mice contributed data to behavioral tasks: 13 (N = 7/6, trauma/sham) to the tone Go/NoGo behavior and 9 (N = 6/3, trauma/sham) to the combined acoustic and optogenetic Go/NoGo behavior. A total of 10 mice contributed to the chronic imaging experiments: two mice were used for widefield imaging (N = 1/1, trauma/sham) and 8 (N = 4/4, trauma/sham) for the two-photon imaging. Twelve mice were only used for regular ABR testing after acoustic trauma to determine the progression of threshold shift. Cochlear histology was performed on 11 of the mice used for Go/NoGo behavioral testing (N = 7/4, trauma/sham). The timing of all procedures performed on each mouse is provided in Supplementary file 1.

Method details

Survival surgeries for awake, head-fixed experiments

Mice were anesthetized with isoflurane in oxygen (5% induction, 1.5–2% maintenance). A homeothermic blanket system (FHC) was used to maintain body temperature at 36.6°C. Lidocaine hydrochloride was administered subcutaneously to numb the scalp. The dorsal surface of the scalp was retracted and the periosteum was removed. The skull surface was prepped with etchant and 70% ethanol before affixing a titanium headplate to the dorsal surface with dental cement. At the conclusion of the headplate attachment and any additional procedures listed below, Buprenex (0.05 mg/kg) and meloxicam (0.1 mg/kg) were administered, and the animal was transferred to a warmed recovery chamber.

High-frequency noise exposure

To induce acoustic trauma, octave-band noise at 16–32 kHz was presented at 103 dB SPL for 2 hr. The exposure stimulus was delivered via a tweeter mounted inside a custom-made exposure chamber (51 × 51 × 51 cm). The interior walls of the acoustic enclosure joined at irregular, non-right angles to minimize standing waves. Additionally, to further diffuse the high-frequency sound field, irregular surface depths were achieved on three of the interior walls by attaching stackable ABS plastic blocks (LEGO). Prior to exposure, mice were placed, unrestrained, in an independent wire-mesh chamber (15 × 15 × 10 cm). This chamber was placed at the center of a continuously rotating plate, ensuring mice were exposed to a relatively uniform sound field. Sham-exposed mice underwent the same procedure except that the exposure noise was presented at an innocuous level (40 dB SPL). All sham and noise exposures were performed at the same time of day.

Go/NoGo tone detection task

Three days after headplate surgery, animals were weighed and placed on a water restriction schedule (1 ml/day). During behavioral training, animals were weighed daily to ensure they remained between 80% and 85% of their initial weight and regularly examined for signs of excess dehydration. Mice were given supplemental water if they received less than 1 ml during a training session or appeared excessively dehydrated. During testing, mice were head-fixed in a dimly lit, single-walled sound attenuating booth (ETS-Lindgren), with their bodies resting in an electrically conductive cradle. Tongue contact on the lickspout closed an electrical circuit that was digitized (at 40 Hz) and encoded to calculate lick timing. Digital and analog signals controlling sound delivery and water reward were controlled by a PXI system with custom software programmed in LabVIEW. Free-field stimuli were delivered via an inverted dome tweeter positioned 10 cm from the left ear and calibrated with a wide-band ultrasonic acoustic sensor (Knowles Acoustics).

Most mice required 2 weeks of behavioral shaping before they could perform the complete tone detection task with psychophysical staircasing. After mice were habituated to head-fixation, they were conditioned to lick the spout within 2 s following the onset of an 8 or 32 kHz 70 dB SPL tone (0.25 s duration, with 5 ms raised cosine onset–offset ramps) to receive a small quantity of water (4 μl). Trials had a variable intertrial interval (4–10 s) randomly selected from a truncated exponential distribution. Once reaction times were consistently <1 s, mice were trained to detect 8 and 32 kHz tones in a 2-down, 1-up adaptive staircasing paradigm, where two correct detections were required to decrease the range of sound intensities by 5 dB SPL and one miss was required to increase the range of sound intensities by 5 dB SPL. At each iteration of the adaptive staircasing procedure, three trials were presented: a catch (silent) trial and tones at ±5 dB SPL relative to the last intensity tested (Figure 1I). A single frequency was presented until one reversal was reached, and then the other tone was presented; a run was completed once six reversals had been reached for both frequencies. The first frequency presented on each daily session was randomized.

Hits were defined as post-target licks that occurred >0.1 and <1.5 s following the onset of the target tone. False alarms (Go responses on a catch trial) triggered a 5-s time out. Entire runs were excluded from analysis if the false alarm rate was greater than 30%. This exclusion criterion resulted in the elimination of <5% of test runs across all conditions (before, after, noise- or sham-exposure), underscoring that mice were under stimulus control even if their hearing thresholds were elevated. Psychometric functions were fit using binary logistic regression. Threshold was defined as the average intensity at reversals across an entire session.
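The adaptive rule and the reversal-based threshold estimate above can be illustrated with a short simulation (illustrative Python, not the LabVIEW task code; the deterministic observer and single-track simplification, omitting catch trials and the paired ±5 dB presentations, are assumptions made for clarity):

```python
def run_staircase(true_threshold_db, start_db=70.0, step_db=5.0, n_reversals=6):
    """Simplified single-track simulation of the 2-down, 1-up rule:
    a hypothetical observer that detects any tone at or above its true
    threshold. Two consecutive hits lower the test level by one step;
    a single miss raises it. Threshold is estimated as the mean
    intensity at the reversal points, as in the Methods."""
    level = start_db
    consecutive_hits = 0
    direction = 0                     # -1: descending run, +1: ascending run
    reversals = []
    while len(reversals) < n_reversals:
        if level >= true_threshold_db:        # hit
            consecutive_hits += 1
            if consecutive_hits == 2:         # 2-down: two hits lower the level
                consecutive_hits = 0
                if direction == +1:           # ascending-to-descending reversal
                    reversals.append(level)
                direction = -1
                level -= step_db
        else:                                 # miss
            consecutive_hits = 0
            if direction == -1:               # descending-to-ascending reversal
                reversals.append(level)
            direction = +1
            level += step_db                  # 1-up: one miss raises the level
    return sum(reversals) / len(reversals)
```

For an observer with a true threshold of 42 dB SPL, the track converges to reversals alternating between 40 and 45 dB SPL, giving a 42.5 dB SPL estimate.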

Go/NoGo optogenetic detection task

Headplate attachment, anesthesia and analgesia followed the procedure described above. Three burr holes were made in the skull over auditory cortex (1.75–2.25 mm rostral to the lambdoid suture). We first expressed Cre-recombinase in neurons that project to the ACtx by injecting 150 nl of AAVrg-pgk-Cre 0.5 mm below the pial surface at three locations within the ACtx with a precision injection system (Nanoject III) coupled to a stereotaxic positioner (Kopf). A fourth injection was then performed to selectively express channelrhodopsin in auditory thalamocortical projection neurons by injecting 100 nl of AAV5-Ef1a-DIO hChR2(E123T/T159C)-EYFP in the MGBv (−2.95 mm caudal to bregma, 2.16 mm lateral to midline, 3.05 mm below the pial surface). An optic fiber (flat tip, 0.2 mm diameter, Thorlabs) was inserted at the MGB injection coordinates to a depth of 2.9 mm below the pial surface. The fiber assembly was cemented in place and painted with black nail polish to prevent light leakage. Animals recovered for at least 3 days before water restriction and behavioral testing began.

After mice were habituated to head-fixation, they were conditioned to lick the spout within 2 s following the onset of 70 dB SPL high-frequency bandpass noise (centered at 32 kHz, width 1 octave). Once responding was consistent, mice were trained to detect optogenetic activation of thalamocortical neurons. The laser was pulsed at 10 Hz, 10 ms pulse width for 500 ms, and the bandpass noise was pulsed at 10 Hz, 20 ms pulse width (5 ms raised cosine onset–offset ramps) for 500 ms. For testing, randomized interleaved blocks of either noise or laser stimulation were presented at a fixed range of levels. The range of sound levels and laser powers was individually tailored prior to noise/sham exposure to ensure equivalent sampling of sound and laser perceptual growth functions, and then these fixed values were used for all post-exposure testing sessions. Tailoring was accomplished by first identifying the lowest laser power and 32 kHz sound level that produced at least 95% hit rates (operationally defined as ‘max’). These sound levels and laser powers were then presented alongside four attenuated levels relative to the maximum as well as no-stimulus catch trials in each mouse on every session. Psychometric functions were fit using binary logistic regression, and threshold was defined as the point where detection crossed 71% correct, which is the closest approximation to the threshold point identified with the 2-down, 1-up staircasing procedure described above. Runs were rejected from further analysis if the false alarm rate of the mouse was above 30%, and again this resulted in the exclusion of <5% of sessions.
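The logistic fit and 71% threshold criterion can be sketched as follows (a minimal numpy implementation fit by gradient descent; the original analysis used MATLAB, and the standardization and iteration settings here are assumptions for numerical stability, not taken from the paper):

```python
import numpy as np

def psychometric_threshold(levels_db, hit, p_target=0.71, n_iter=20000, lr=0.1):
    """Fit a binary logistic psychometric function p = 1/(1+exp(-(a+b*x)))
    by gradient descent on the cross-entropy loss and return the level at
    which detection probability crosses p_target. `levels_db` holds
    single-trial stimulus levels and `hit` the 0/1 trial outcomes."""
    x = np.asarray(levels_db, float)
    y = np.asarray(hit, float)
    mu, sd = x.mean(), x.std()
    xn = (x - mu) / sd                        # standardize for stable fitting
    a, b = 0.0, 1.0                           # intercept, slope
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-(a + b * xn)))
        err = p - y                           # cross-entropy gradient terms
        a -= lr * err.mean()
        b -= lr * (err * xn).mean()
    logit = np.log(p_target / (1.0 - p_target))
    return mu + sd * (logit - a) / b          # convert back to dB units
```

Applied to trials simulated from a known psychometric function, the returned threshold approximates the level where the true function crosses 71% correct.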

Widefield and two-photon calcium imaging

Three round glass coverslips (two 3 mm and one 4 mm diameter, #1 thickness, Warner Instruments) were etched with piranha solution and bonded into a vertical stack using transparent, UV-cured adhesive (Norland Products, Warner Instruments). Headplate attachment, anesthesia, and analgesia followed the procedure listed above. A circular craniotomy (3 mm diameter) was made over the right ACtx using a scalpel and the coverslip stack was cemented into the craniotomy. Animals recovered for at least 5 days before beginning imaging recordings. All imaging was performed in awake, passively listening animals.

A series of pilot widefield imaging experiments were performed to visualize changes in all fields of the ACtx over a longer, 30- to 60-day post-exposure time period (N = 8 noise-exposed and 7 sham-exposed mice). The data collection procedure for these pilot experiments followed the methods described in detail in our previous publication (Romero et al., 2019). Briefly, widefield epifluorescence images were acquired with a tandem-lens microscope (THT-microscope, SciMedia) configured with low-magnification, high-numerical aperture lenses (PLAN APO, Leica, ×2 and ×1 for the objective and condensing lenses, respectively). Blue illumination was provided by a light-emitting diode (465 nm, LEX2-LZ4, SciMedia). Green fluorescence passed through a filter cube and was captured at 20 Hz with a sCMOS camera (Zyla 4.2, Andor Technology).

Cellular imaging was performed with a two-photon imaging system in a light-tight sound-attenuating enclosure mounted on a floating table (Bergamo II Galvo-Resonant 8 kHz scanning microscope, Thorlabs). An initial lower resolution epifluorescence widefield imaging session was performed with a CCD camera to visualize the tonotopic gradients of the ACtx and identify the position of A1 (as shown in Figure 3C). Two-photon excitation was provided by a Mai-Tai eHP DS Ti:Sapphire-pulsed laser tuned to 940 nm (Spectra-Physics). Imaging was performed with a ×16/0.8 NA water-immersion objective (Nikon) from a 512 × 512 pixel field of view at 30 Hz. Scanning software (Thorlabs) was synchronized to the stimulus generation hardware (National Instruments) with digital pulse trains. The microscope was rotated by 50–60 degrees off the vertical axis to obtain images from the lateral aspect of the mouse cortex while the animal was maintained in an upright head position. Animals were monitored throughout the experiment to confirm all imaging was performed in the awake condition using modified cameras (PlayStation Eye, Sony) coupled to infrared light sources. Imaging was performed in layer 2/3 (L2/3), 175–225 µm below the pial surface. Fluorescence images were captured at ×1 digital zoom, providing an imaging field of 0.84 × 0.84 mm.

Raw calcium movies were processed using Suite2P, a publicly available two-photon calcium imaging analysis pipeline (Pachitariu et al., 2016). Briefly, movies are registered to account for brain motion. Regions of interest are established by clustering neighboring pixels with similar time courses. Manual curation is then performed to eliminate low quality or non-somatic regions of interest. Spike deconvolution was also performed in Suite2P, using the default method based on the OASIS algorithm (Friedrich et al., 2017). For chronic tracking of individual cells across imaging sessions, cross-day image registration was performed using a method outlined by Sheintuch et al., 2017. Briefly, fields-of-view are aligned to a reference imaging session using a non-rigid transformation, and a probabilistic modeling approach is used to estimate whether neighboring cells from separate sessions are the same or different cells. To estimate the false positive rate with this approach, we also performed a control in which cross-day registration was performed with daily imaging fields randomly selected from different mice (Figure 6—figure supplement 1). For all analysis of tracked neurons, only cells with a confidence score of at least 0.8 (max of 1) and that were tracked for at least 8 of the 15 imaging sessions were used for the analysis. In analyses identifying cells that were either ‘lost’ or ‘appeared’ after noise/sham exposure (Figure 6—figure supplement 1), ‘appeared’ cells were defined as not being found in any baseline imaging sessions and also found in at least eight imaging sessions after noise exposure. Cells ‘lost’ at day X relative to noise exposure were consecutively tracked in all baseline sessions and days 0 through X, and not found in any session after day X. The same confidence threshold of 0.8 was also applied to analyses of ‘lost’ and ‘appeared’ cells.
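The inclusion criteria for chronically tracked neurons can be expressed compactly (a Python sketch over hypothetical CellReg-style outputs, not the original analysis code; the array layout is an assumption):

```python
import numpy as np

def select_tracked_cells(conf, tracked, min_conf=0.8, min_sessions=8):
    """Apply the tracked-neuron inclusion criteria described above:
    registration confidence of at least 0.8 (max of 1) and presence in
    at least 8 of the 15 imaging sessions. `conf` is an (n_cells,) array
    of confidence scores; `tracked` is an (n_cells, n_sessions) boolean
    matrix marking sessions in which each cell was found. Returns the
    indices of cells that pass both criteria."""
    tracked = np.asarray(tracked, bool)
    keep = (np.asarray(conf) >= min_conf) & (tracked.sum(axis=1) >= min_sessions)
    return np.flatnonzero(keep)
```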

During widefield imaging sessions, 20–70 dB SPL tones (in 10 dB steps) were presented from 4 to 64 kHz in 0.5 octave steps. On the first and last two-photon imaging sessions and on the day of noise exposure, 20–80 dB SPL tones (15 dB steps) were presented from 4 to 45.3 kHz (0.5 octave steps). For all other two-photon imaging sessions, 20–80 dB SPL tones (in 10 dB steps) were presented at 5.7, 8, 11.3, and 32 kHz. Each day, all stimuli were repeated 20 times. One block consisted of all frequency–intensity combinations, and stimuli were randomized within blocks. Tones were 50 ms with 5 ms raised cosine onset–offset ramps with 3 s inter-trial intervals.

Cochlear function tests

Animals were anesthetized with ketamine (120 mg/kg) and xylazine (12 mg/kg) and were placed on a homeothermic heating blanket during testing, with half the initial ketamine dose given as a booster when required. Acoustic stimuli were presented via in-ear acoustic assemblies consisting of two miniature dynamic earphones (CUI CDMG15008-03A) and an electret condenser microphone (Knowles FG-23339-PO7) coupled to a probe tube. To highlight the peripheral neural contribution to the ABR, subdermal electrodes were positioned in a horizontal (pinna-to-pinna) montage (Galbraith et al., 2006; Melcher et al., 1996). Stimuli were calibrated in the ear canal in each mouse before recording. ABR stimuli were 5 ms tone pips at 8, 12, 16, or 32 kHz with a 0.5 ms rise-fall time delivered at 30 Hz. Intensity was incremented in 5 dB steps, from 20 to 100 dB SPL. ABR threshold was defined as the lowest stimulus level at which a repeatable waveform could be identified. DPOAEs were measured in the ear canal using primary tones with a frequency ratio of 1.2, with the level of the f2 primary set to be 10 dB less than the f1 level, incremented together in 5 dB steps. The 2f1–f2 DPOAE amplitude and surrounding noise floor were extracted. DPOAE threshold was defined as the lowest of at least two continuous f2 levels for which the DPOAE amplitude was at least two standard deviations greater than the noise floor. DPOAE and ABR testing was performed 1 week before noise- or sham-exposure, and again immediately following the conclusion of behavioral testing or imaging.

Cochlear histology

To visualize cochlear afferent synapses, inner hair cells (IHCs), and outer hair cells (OHCs), cochleae were dissected and perfused through the round and oval windows with 4% paraformaldehyde in phosphate-buffered saline (PBS), then post-fixed in the same solution for 1 hr. Cochleae were then decalcified in 0.12 M ethylenediaminetetraacetic acid (EDTA) for 2 days and dissected into half-turns for whole-mount processing. Immunostaining began with a blocking buffer (PBS with 5% normal goat or donkey serum and 0.2–1% Triton X-100) for 1 hr at room temperature. Whole mounts were then immunostained by incubating with a combination of the following primary antibodies: (1) rabbit anti-CtBP2 at 1:100, (2) rabbit anti-myosin VIIa at 1:200, and (3) mouse anti-GluR2 at 1:2000 and secondary antibodies coupled to the red, blue, and green channels. Immunostained cochlear pieces were measured, and a cochlear frequency map was computed (Müller et al., 2005) to associate structures to relevant frequency regions using a plug-in to ImageJ (Parthasarathy and Kujawa, 2018).

Images were collected in a 2400 × 900 raster using a high-resolution, oil immersion objective (×63, numerical aperture 1.3) and ×1.25 zoom and assessed for signs of damage. Confocal z-stacks at identical frequencies were collected using a Leica TCS SP5 microscope to visualize hair cells and synaptic structures. Two adjacent stacks were obtained (78 µm cochlear length per stack) at each target frequency spanning the cuticular plate to the synaptic pole of ~10 hair cells (in 0.25 µm z-steps). Images were collected in a 1024 × 512 raster using a high-resolution, oil immersion objective (×63, numerical aperture 1.3), and digital zoom (×3.17). Images were loaded into an image-processing software platform (Amira; VISAGE Imaging), where IHCs were quantified based on their Myosin VIIa-stained cell bodies and CtBP2-stained nuclei. Presynaptic ribbons and postsynaptic glutamate receptor patches were counted using 3D representations of each confocal z-stack. Juxtaposed ribbons and receptor puncta constitute a synapse, and these synaptic associations were determined by calculating and displaying the xy projection of the voxel space within 1 µm of each ribbon’s center (Liberman et al., 2011). OHCs were counted based on the myosin VIIa staining of their cell bodies. The mean number of cells per row of OHCs was used as a measure of OHC counts.

For visualizing OHC stereocilia damage, following similar whole-mount dissection and blocking procedures, the other ear was immunostained with a combination of the following primary antibodies (1) rabbit anti-CtBP2 at 1:100, (2) mouse anti-GluR2 at 1:2000, and (3) rabbit anti-Espin at 1:100, followed by secondary antibodies in the red, green, and gray channels. Confocal z-stacks of the stereocilia were collected at 5.6, 11.3, 22, 32, 45, and 64 kHz cochlear frequencies with a Leica TCS SP8 microscope.

Brain histology

Mice performing the Go/NoGo optogenetic detection task were deeply anesthetized and prepared for transcardial perfusion with a 4% formalin solution in 0.1 M phosphate buffer 21 days after noise exposure. The brains were extracted and post-fixed at room temperature for an additional 12 hr before transfer to 30% sucrose solution. Coronal sections (50 µm) were mounted onto glass slides using Vectashield with DAPI, and then coverslipped. Regions of interest were then imaged at ×10 using a Leica DM5500B fluorescent microscope.

Quantification and statistical analysis

Clinical database analysis

First-visit patient records from the Massachusetts Eye and Ear audiology database over a 24-year period from 1993 to 2016 were analyzed. Our analysis selected for adult patients aged 18 to 80, whose primary language was English, and who underwent pure tone audiogram tests in the left and right ears with octave-spaced frequencies between 250 Hz and 8 kHz using headphones or inserts. To eliminate patients with conductive components in their hearing loss, the MEE dataset was further curated to remove all audiograms where the air-bone gap was ≥20 dB at any one frequency or ≥15 dB at two consecutive frequencies. Audiograms with thresholds ≥85 dB HL at frequencies ≤2000 Hz were also removed to maintain a conservative inclusion criterion, as the differing output limits of the air- and bone-conduction transducers limit our ability to determine the presence of conductive components in that threshold range. After this exclusionary step, we were left with 132,504 audiograms in the dataset for analysis. Of these audiograms, high-frequency hearing loss (HFHL) was defined as thresholds lower than 20 dB HL for frequencies <1 kHz, between 10 and 80 dB HL at 2 kHz, between 20 and 120 dB HL at 4 kHz, and between 40 and 120 dB HL at 8 kHz, following the same criteria used to identify a steeply sloping high-frequency hearing loss in prior clinical database studies (Dubno et al., 2013; Parthasarathy et al., 2020b; Vaden et al., 2017). Based on these criteria, the HFHL audiometric phenotype comprised 23% of the audiograms assessed. These patients were 65% male and had a median age of 65 years. The study was approved by the human subjects Institutional Review Board at Mass General Brigham and Massachusetts Eye and Ear. Data analysis was performed on deidentified data, in accordance with the relevant guidelines and regulations.
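The high-frequency hearing loss (HFHL) inclusion rule can be stated as a short predicate (illustrative Python, not the database query code; the ordered frequency list is an assumption based on the octave-spaced 250 Hz–8 kHz audiograms described above):

```python
# Audiometric test frequencies (Hz), assumed octave-spaced from 250 Hz to 8 kHz.
FREQS_HZ = [250, 500, 1000, 2000, 4000, 8000]

def is_hfhl(thresholds_db_hl):
    """Return True if an audiogram meets the steeply sloping HFHL criteria:
    thresholds < 20 dB HL below 1 kHz, 10-80 dB HL at 2 kHz, 20-120 dB HL
    at 4 kHz, and 40-120 dB HL at 8 kHz. `thresholds_db_hl` lists one
    threshold per frequency in FREQS_HZ order."""
    t = dict(zip(FREQS_HZ, thresholds_db_hl))
    return (all(t[f] < 20 for f in (250, 500))
            and 10 <= t[2000] <= 80
            and 20 <= t[4000] <= 120
            and 40 <= t[8000] <= 120)
```

A sloping audiogram such as [10, 15, 15, 40, 60, 70] dB HL satisfies the rule, whereas a flat normal audiogram does not.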

Behavioral analysis

To estimate perceptual gain in the Go/NoGo tone detection task, hit rates were taken at intensities ranging from the lowest intensity with sufficient trials (>5 trials) to the first intensity at which the hit rate was above 90% to account for saturation of the detection function. The perceptual gain was calculated as the average first derivative of the d′ function evaluated over the specified intensity range. The gain calculation was performed for each animal based on aggregated daily test sessions for a given post-exposure epoch (e.g., baseline, 0–2 days post-exposure).
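The gain computation reduces to the mean slope of the d′-versus-intensity function, where d′ = z(hit rate) − z(false alarm rate). A minimal Python sketch (the clipping of rates to [0.01, 0.99] before the z-transform is a common convention assumed here, not taken from the paper):

```python
from statistics import NormalDist

def perceptual_gain(levels_db, hit_rates, fa_rate):
    """Average slope (d-prime per dB) of the d-prime-vs-intensity function,
    i.e. the perceptual gain measure described above. `levels_db` and
    `hit_rates` are parallel lists over the analyzed intensity range;
    `fa_rate` is the session false alarm rate."""
    z = NormalDist().inv_cdf                      # inverse standard normal CDF
    clip = lambda p: min(max(p, 0.01), 0.99)      # keep z-transform finite
    dprime = [z(clip(h)) - z(clip(fa_rate)) for h in hit_rates]
    slopes = [(dprime[i + 1] - dprime[i]) / (levels_db[i + 1] - levels_db[i])
              for i in range(len(dprime) - 1)]
    return sum(slopes) / len(slopes)
```

For hit rates that grow linearly in z-units (one d′ unit per 10 dB), the function returns a gain of 0.1 d′/dB.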

Two-photon image analysis

All analysis was performed on the deconvolved calcium activity traces. For analysis of tone-evoked responses, averaged deconvolved calcium traces were expressed as Z-score units relative to activity levels measured during the pre-stimulus period (833 ms). PyrNs were operationally defined as being responsive to a particular tone frequency/level combination with a Z > 2. For the three imaging sessions that calculated the full frequency-response area, the minimum response threshold was defined for each PyrN as the lowest level at which there were responses to two adjacent frequencies (frequencies 0.5 octaves apart). Best frequency (BF) was defined as the frequency for which the overall response was maximal over the intensity range of threshold + 30 dB. Analysis of BF changes was limited to PyrNs with pure tone receptive fields (neural d′ > 1, as defined in Romero et al., 2019).
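The threshold and BF definitions above can be sketched over a frequency-response area matrix (illustrative Python; the array layout with levels sorted low to high is an assumption, and `fra` is a hypothetical matrix of trial-averaged Z-scored responses):

```python
import numpy as np

def best_frequency(fra, freqs_khz, levels_db, z_thresh=2.0):
    """Minimum response threshold: the lowest level with responses (Z > 2)
    at two adjacent frequencies (0.5 octaves apart). BF: the frequency
    with the largest summed response from threshold to threshold + 30 dB.
    `fra` is an (n_levels x n_freqs) matrix of Z-scored responses."""
    fra = np.asarray(fra, float)
    levels = np.asarray(levels_db, float)
    resp = fra > z_thresh
    for li in range(len(levels)):
        if np.any(resp[li, :-1] & resp[li, 1:]):   # two adjacent frequencies
            band = (levels >= levels[li]) & (levels <= levels[li] + 30)
            return freqs_khz[int(np.argmax(fra[band].sum(axis=0)))]
    return None                                    # no threshold found
```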

To delineate the intact and deafferented regions of the imaging field, an SVM boundary was calculated for each mouse using BFs determined from the first day of imaging prior to noise exposure. The SVM deafferentation boundary categorized physical space (intact: BF <16 kHz, deafferented: BF ≥16 kHz), and its physical location was then imposed on all successive imaging sessions after alignment of fields-of-view from all imaging sessions. To categorize the position of a neuron, the adjusted centroid locations were used based on the best registration from the previously mentioned method. The distance of a neuron to the SVM deafferentation boundary was calculated as the shortest Euclidean distance.

Gain was defined as the relationship between sound level (input) and activity rate (output). The gain was calculated as the average rate of change over a range of sound levels. The particular set of sound levels selected for gain analysis was determined according to whether the best level occurred at low, mid, or high sound levels as illustrated in Figure 5—figure supplement 1. For a neuron to be considered for an analysis of gain, it was required to have a significant response (Z > 2) to at least three consecutive intensities.

Spontaneous activity was calculated from the 833 ms periods preceding tone onset. To quantify the correlated activity between cells, we cross-correlated the Z-scored activity in the pre-stimulus periods. To control for the effects of overall changes in activity rates over sessions or between cells, shuffled cross-correlograms were generated for each pair by shuffling trial labels. Only cross-correlograms for which at least three consecutive lags had values significantly greater than the shuffled cross-correlogram (bootstrapped p < 0.05, Bonferroni corrected for multiple comparisons) were used for analysis. The degree of correlated activity between each pair was defined as the size of the positive area under the peak of the shuffle-subtracted cross-correlogram (xcorr area).
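The shuffle-corrected pairwise correlation measure can be sketched as follows (a simplified numpy illustration over hypothetical trials × time activity arrays; the lag-wise significance criterion from the Methods is omitted for brevity):

```python
import numpy as np

def xcorr_area(x, y, max_lag=5, n_shuffles=200, seed=0):
    """x, y: (n_trials, n_time) Z-scored pre-stimulus activity of two cells.
    Trial-averaged cross-correlograms are computed; the shuffle control
    recomputes them after permuting one cell's trial labels to remove
    correlation due to overall rate changes. Returns the positive area of
    the shuffle-subtracted cross-correlogram (the 'xcorr area')."""
    rng = np.random.default_rng(seed)
    lags = np.arange(-max_lag, max_lag + 1)

    def mean_ccg(a, b):
        n_t = a.shape[1]
        ccg = np.zeros(lags.size)
        for i, lag in enumerate(lags):
            if lag >= 0:
                ccg[i] = np.mean(a[:, lag:] * b[:, :n_t - lag])
            else:
                ccg[i] = np.mean(a[:, :n_t + lag] * b[:, -lag:])
        return ccg

    observed = mean_ccg(x, y)
    shuffled = np.mean([mean_ccg(x, y[rng.permutation(len(y))])
                        for _ in range(n_shuffles)], axis=0)
    return np.sum(np.clip(observed - shuffled, 0, None))
```

Cells sharing a common signal yield a large positive area; independent cells yield values near zero.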

To determine how ensemble activity decoded tone presence, we used an SVM classifier with a linear kernel (following the approach of Resnik and Polley, 2021). The SVM was run on the principal components of a data matrix consisting of the Z-scored responses to single tone presentations or silent periods. PCA was used to reduce the influence of any inequities in sample sizes across mice or conditions. We ran the SVM on the minimum number of principal components required to explain 90% of the variance. Leave-one-out cross-validation was then used to train the classifier and compute the decoder accuracy. We repeated this process independently for each intensity of the 8 and 32 kHz tones for each imaging session. The models were fit using the ‘fitcsvm’ function in MATLAB.
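The pipeline (PCA to 90% variance, then leave-one-out cross-validation) can be sketched in a self-contained form; as an assumption for brevity, a nearest-centroid linear classifier stands in here for the linear-kernel SVM (‘fitcsvm’) used in the paper:

```python
import numpy as np

def loo_decoding_accuracy(responses, labels, var_explained=0.90):
    """Leave-one-out decoding of tone presence from ensemble activity.
    `responses`: (n_trials x n_neurons) Z-scored responses; `labels`:
    0 (silence) or 1 (tone). PCA via SVD keeps the fewest components
    explaining `var_explained` of the variance; each held-out trial is
    classified by its nearest class centroid in PC space."""
    X = np.asarray(responses, float)
    y = np.asarray(labels)
    Xc = X - X.mean(axis=0)
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    var = (S ** 2) / np.sum(S ** 2)
    k = int(np.searchsorted(np.cumsum(var), var_explained)) + 1
    Z = Xc @ Vt[:k].T                      # trials x k principal components
    correct = 0
    for i in range(len(y)):                # leave-one-out cross-validation
        train = np.arange(len(y)) != i
        c0 = Z[train & (y == 0)].mean(axis=0)
        c1 = Z[train & (y == 1)].mean(axis=0)
        pred = int(np.linalg.norm(Z[i] - c1) < np.linalg.norm(Z[i] - c0))
        correct += (pred == y[i])
    return correct / len(y)
```

On well-separated simulated ensembles, the decoder reaches ceiling accuracy; for weak responses, it approaches chance (0.5).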

To model how pre-exposure properties can predict the change in a neuron’s responsiveness, the outcome variable was the post/pre-exposure ratio of the areas under the intensity-response growth function, where post was drawn from days 3 to 5 after exposure. All predictor variables were computed as average values from the pre-exposure period. The best linear model was fit using stepwise multiple linear regression with the Akaike information criterion. For the purposes of comparison, the predictor variables from the trauma model were applied to the sham model. The stepwise regression was fit using ‘stepwiselm’ and subsequent model fits used ‘fitlm’ in MATLAB.
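The selection logic can be sketched as forward stepwise regression scored by AIC (illustrative numpy code; MATLAB’s ‘stepwiselm’ additionally considers predictor removal and interaction terms, which this sketch omits):

```python
import numpy as np

def aic_linear(y, X):
    """AIC of an OLS fit with intercept (Gaussian likelihood, up to an
    additive constant): n*log(RSS/n) + 2k, with k fitted parameters."""
    n = len(y)
    A = np.column_stack([np.ones(n), X]) if X.size else np.ones((n, 1))
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    rss = np.sum((y - A @ beta) ** 2)
    return n * np.log(rss / n) + 2 * A.shape[1]

def forward_stepwise(y, X):
    """Greedily add the predictor column of X that most lowers the AIC,
    stopping when no addition improves it. Returns the chosen columns."""
    remaining = list(range(X.shape[1]))
    chosen = []
    best = aic_linear(y, X[:, []])         # intercept-only baseline
    improved = True
    while improved and remaining:
        improved = False
        aic, j = min((aic_linear(y, X[:, chosen + [j]]), j) for j in remaining)
        if aic < best:
            best, improved = aic, True
            chosen.append(j)
            remaining.remove(j)
    return chosen
```

On simulated data where only two of five predictors carry signal, the procedure reliably recovers both informative columns.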

Statistical analysis

All statistical analyses were performed in MATLAB 2017b (Mathworks). Data are reported as mean ± standard error of the mean unless otherwise indicated. Post hoc pairwise comparisons were adjusted for multiple comparisons using the Bonferroni correction.

Data availability

All figure code and data will be available on the Harvard Dataverse at the following: https://doi.org/10.7910/DVN/JLIKOZ.

The following data sets were generated
    1. McGill M
    (2022) Harvard Dataverse
    Neural signatures of auditory hypersensitivity following acoustic trauma.
    https://doi.org/10.7910/DVN/JLIKOZ

References

  1. Book
    1. Herrmann B
    2. Butler BE
    (2021a) Chapter 17-aging auditory cortex: the impact of reduced inhibition on function in
    In: Martin CR, Preedy VR, Rajendram R, editors. Assessments, Treatments and Modeling in Aging and Neurological Disease. Academic Press. pp. 183–192.
    https://doi.org/10.1016/B978-0-12-818000-6.00017-2

Decision letter

  1. Brice Bathellier
    Reviewing Editor; CNRS, France
  2. Barbara G Shinn-Cunningham
    Senior Editor; Carnegie Mellon University, United States
  3. Brice Bathellier
    Reviewer; CNRS, France
  4. Arnaud Norena
    Reviewer; CNRS, France
  5. Victoria M Bajo Lorenzana
    Reviewer; University of Oxford, United Kingdom

Our editorial process produces two outputs: i) public reviews designed to be posted alongside the preprint for the benefit of readers; ii) feedback on the manuscript for the authors, including requests for revisions, shown below. We also include an acceptance summary that explains what the editors found interesting or important about the work.

Decision letter after peer review:

Thank you for submitting your article "Neural signatures of auditory hypersensitivity following acoustic trauma" for consideration by eLife. Your article has been reviewed by 3 peer reviewers, including Brice Bathellier as Reviewing Editor and Reviewer #1, and the evaluation has been overseen by Barbara Shinn-Cunningham as the Senior Editor. The following individuals involved in the review of your submission have agreed to reveal their identity: Arnaud Norena (Reviewer #2); Victoria M Bajo Lorenzana (Reviewer #3).

The reviewers have discussed their reviews with one another, and the Reviewing Editor has drafted this to help you prepare a revised submission.

Essential revisions:

1) Please either address the concerns of reviewers 2 and 3 about the behavioral readout of hyperacusis or tone down the conclusions to preclude over-interpreting indirect measurements of perception. In particular, not all animals experiencing noise trauma will experience hyperacusis. One possible way to improve the description of the hyperacusis condition would be to show a correlation between behavioral results and physiological measurement at the single animal level.

2) The referees have requested a large number of clarifications. Please carefully take this into account.

Reviewer #1 (Recommendations for the authors):

I have no particular comment on this manuscript which, to me, is carefully written, and includes carefully executed experiments and interesting results.

Reviewer #2 (Recommendations for the authors):

Specific comments

"As an exception, homeostatic regulation of neural activity often fails in the adult auditory system after hearing loss" – what do you mean?

"ii) a selective inability to perceptually suppress distracting sensory features,… "

I would remove this aspect as it is not really addressed in the paper and it is (in my opinion) different from tinnitus and hyperacusis: it is a more high-level phenomenon linked to informational masking.

"volume of electrical activity"

The use of "volume" is awkward. Why not use "surface"? I would use another term (level, sum, integral? etc.).

"These changes are often described as homeostatic plasticity, but in the context of adult plasticity after hearing loss they reflect a failure in homeostatic processes that maintain neural excitability at a stable set point. Whereas homeostatic changes – by definition – restore neural activity to a baseline activity rate following a perturbation (Turrigiano, 2012), neural gain adjustments following adult-onset hearing loss often over-shoot the mark, producing catastrophic downstream consequences at the level of network excitability and sound perception (Eggermont, 2017; Noreña, 2011)."

See my point in the Public Review. I am not sure we know whether the averaged central activity is really enhanced after hearing loss. What is enhanced is the spontaneous activity and the stimulus-evoked activity for supra-threshold signals, but overall the averaged activity should not be increased. The enhanced neural activity post-hearing loss just compensates for the reduction of stimulus-evoked activity due to hearing loss.

"A closer inspection of the mouse psychometric detection functions for the spared low-frequency tone suggested something akin to the clinical phenomenon of loudness recruitment." I am sorry it is not clear to me.

"Following acoustic trauma, the behavioral sensitivity index (d-prime, or d') for 8 kHz tones grew more steeply for sound intensities near thresholds compared both to pre-exposure baseline detection functions or sham exposure controls (Figure 2A-B)."

The authors should be more explicit, since I don't know where to look for these results. I don't understand how the sensation level (dB SL) can be negative. Does a negative dB SL mean below threshold? What are the results for high frequencies?

"That intensity-detection slopes were increased for the low-frequency tone at near-threshold sound levels argues against a peripheral origin for this change and instead suggests increased gain in the central auditory pathway." Why? Please provide complete argumentation for this.

"Importantly, testing in interleaved trials revealed behaviorally hypersensitivity to direct stimulation of auditory thalamocortical neurons after acoustic trauma (Figure 2E), demonstrating significantly increased d' across optogenetic stimulation intensities compared to sham controls (Figure 2F)". I love that experiment, testing neural sensitivity independently from the cochlea and hearing loss. However, I don't see much effects on the psychometric function in Figure 2E.

"A marked increase in PyrNs tuned to lesion edge frequencies could contribute to the enhanced perceptual sensitivity to 8 kHz tones identified in behavioral experiments (Figure 2)." Where does this hypothesis come from? Usually, loudness hyperacusis is not frequency-specific and is often found in subjects with small, if any, hearing loss.

Could you explain how the gain is calculated? In Figure 5A left example, I see an absolute response of 10 (15-10) over 40 dB, 10/40=0.4? But the value that is indicated is 0.23. On the other hand, it seems to work on the right example: 65/40=1.62.

"Classification of single trial A1 ensemble responses supported the hypothesis that cortical discrimination of sound versus silence would be enhanced for low- and mid-intensity 8 kHz tones but reduced for 32 kHz tones after trauma (Figure 5E). Enhanced neural detection of 8 kHz tones was largely driven by PyrNs in deafferented map regions, whereas the loss of cortical sensitivity to high-frequency tones was observed in both topographic zones (Figure 5F)."

I don't understand the point here, and more precisely the link between this result and the putative loudness hyperacusis. According to human data, loudness hyperacusis should be found at 8 kHz and 32 kHz for supra-threshold acoustic stimulation.

Also, it seems that neural activity is unchanged after noise trauma in the "intact region" when neural activity is assessed by calcium imaging, while the activity has been found to be largely enhanced in that region when neural activity is assessed with electrophysiology (MUA and LFP). That should be mentioned in the discussion. I am not a specialist in calcium imaging, but the authors may explain somewhere (briefly) the relationship between calcium responses and MUA and LFP. Is there any correlation between calcium responses (the max) and MUA response and LFP amplitude?

Could the authors explain how they assess spontaneous activity from calcium imaging? What is the shape of the signal that is recorded? Is this signal similar to spontaneous LFP? To spiking activity? This is important when the authors assess the synchrony between cells, so that we understand what is synchronized. If the activity is globally increased, we expect a "mechanical" increase in synchrony that is difficult to correct for. With spiking activity it is different (it is easier to correct for, except for onset and offset responses), since the duration of a spike is very short compared to the inter-spike interval.

Discussion

I suggest being much more cautious in the interpretation of homeostatic plasticity and behavioral data.

"Furthermore, to control for any changes in licking behavior due to acoustic trauma, our slope measurements are taken from functions using a sensitivity index (d'), which normalizes lick probability according to the false alarm rate determined from the delivery of catch (silent) trials. Thus, a steeper relationship between increasing tone intensity and perceptual sensitivity strongly suggests a hypersensitivity to sound following acoustic trauma (Figure 2)."

I am not convinced by this conclusion. At the threshold, enhanced arousal (due to stress after noise trauma) may increase d'. I am not saying stress is necessarily enhanced in exposed animals but it is just to mention that the link between psychometric function for thresholds and loudness is not as straightforward as the authors seem to believe it is.

The discussion should come back a bit more to the literature and emphasize in more detail what is consistent with the literature, what is not, what is new, etc.

I suggest publishing the behavioral data elsewhere, and focusing here on the neural responses before and after the noise trauma, emphasizing clearly what is new, etc.

Reviewer #3 (Recommendations for the authors):

Specific concerns:

Introduction

The commonalities between tinnitus, hyperacusis, and speech-in-noise impairment are unknown; their association with hearing loss is known, but not what exactly happens at the neural level. It seems extreme to group normal aging with ADHD, autism, or schizophrenia, and to claim that tinnitus, hyperacusis, and speech-in-noise impairment can be involved in all those conditions.

Line 76: we don't know yet whether neural gain adjustments following adult-onset hearing loss overshoot the mark. In fact, Koops, Renken, Lanting and van Dijk claimed that cortical tonotopic map changes in humans are larger in hearing loss than in additional tinnitus. J Neurosci. 2020 Apr 15; 40(16): 3178-3185. doi: 10.1523/JNEUROSCI.2083-19.2020

Please stay consistent with the use of trauma, acoustic trauma, sound overexposure, and high-frequency sound exposure throughout the manuscript.

Results

A clarification about the percentage of animals that develop loudness hypersensitivity after sound overexposure is necessary. As previously mentioned, hyperacusis and tinnitus do not occur in all animal models of sensorineural hearing loss induced by overexposure to high-frequency sounds. How do the authors distinguish between animals with sensorineural hearing loss and animals with associated tinnitus and/or hyperacusis?

Perceptual hypersensitivity following noise-induced high-frequency sensorineural hearing loss.

Please state more precisely the differences between hyperacusis and hypersensitivity to loud sounds.

Explain why only wave 1 was used to explore the auditory thresholds in ABRs when wave 5, for example, would be easy to identify.

It is not clear how long the animals were tested, or the survival time of the animals between the induction of hearing loss and the histology showing loss of synaptic contacts on IHCs in the base of the cochlea and death of OHCs in the base, with stereocilia changes in the base and mid coils. Different figures use different days and groupings of days. Please clarify.

Figure 1, panels J and K: change the colours to make them more distinct. Why is there a threshold elevation at 8 kHz in the first two days?

Behavioural hypersensitivity to cochlear lesion edge frequencies and direct auditory thalamocortical activation after SNHL.

Would it be possible that the differential distribution of damage between the IHCs and OHCs (base only versus base and middle coil) could explain the changes in ABRs, DPOAEs, and behavioural thresholds?

Please clarify why, in the combined optogenetic (ChR2) and auditory stimulation, only a high-frequency tone centred at 32 kHz was used. Does the optogenetic stimulation affect all frequency ranges in vMGN or only the high frequencies?

In Figure 2, panel E, it seems that the psychometric curve in the baseline condition for the hearing loss animals is shifted to the right compared with the baseline in the sham condition. It would be easier to compare them if they were placed in the same panel.

Chronic imaging in A1 reveals tonotopic remapping and dynamic spatiotemporal adjustments in neural response gain after acoustic trauma.

The authors mention that tonotopic remapping was limited to the deafferented zone, where the proportion of layer 2/3 pyramidal neurons preferentially tuned to lesion-edge frequencies more than doubled following acoustic trauma without any systematic change in response. Please specify what is meant by the lack of systematic change.

The identification of the frequency edge is convincing in the widefield imaging but less convincing at the neural level. How was the deafferentation boundary established?

Please specify the rationale for the use of these four specific frequencies (32, 5.7, 8, and 11.3 kHz).

Cortical hyperresponsivity and increased gain mirrors behavioural hypersensitivity to spared low-frequency inputs.

This section is not particularly convincing, and Figure 5 is difficult to follow. For example, in panel D it is impossible to distinguish the two trial types, sound and silence. Only panel F is clear and easy to follow.

Tracking changes in activity and local synchrony in individual PyrNs over several weeks.

Explain the differences across neurons in the probability of being tracked for just one day versus for 15 sessions. Please explore the possibility that the baseline properties of the neurons affect their trackability in chronic imaging.

The authors mention that, following noise overexposure, active pyramidal neurons disappeared from the deafferented area, with a permanent increase in the response to the 8 kHz tone in the deafferented area but not in the low-frequency area. This increase was first seen at all intensity levels and later only above 50 dB SPL. When do these changes happen? After 3 days? When does the second stage begin? Could the authors distinguish between an acute and a chronic phase?

Predicting the degree of excess central gain after trauma in individual PyrNs based on their pre-exposure response features.

Please expand your explanation of the potential heterogeneity. I am not certain whether the authors mean two different groups of pyramidal neurons or a gradient from low to high spontaneous activity at baseline.

Here again the temporal limit is different, and 3.5 days is used. Please explain the rationale for this, and use the same time points throughout, with justification.

I am curious to know what the result of those central gain functions would be if other frequencies were used.

Discussion

Please state the differences between the hearing loss in this animal model, induced by acute sound overexposure, and the majority of ageing-related sensorineural hearing loss (presbyacusis), where the changes are more gradual.

Excess central gain is the hypothesized neural substrate for loudness recruitment and generalised hyperacusis, but can the authors think of other potential causes?

Line 353. Typo long spaces in the same line.

Why, and to what end, do pyramidal neurons in layers 2/3 with low spontaneous activity and greater responses to low-intensity tones show more stable gain control after trauma? I do not understand what "stochastic variations in inhibitory tone between different microcircuit milieus" means. Is it about the different types and functions of inhibitory interneurons in the cortex? Could neuromodulators such as ACh play a role too?

Can the different types of neurons be distinguished? Again, what happens when the animals have only hearing loss, without tinnitus or hyperacusis? Could it be that the accumulation of one or more symptoms after hearing loss depends on the proportion of active cells at baseline?

The authors mention the startle reflex as an indicator of hypersensitivity, but usually animals show a high degree of adaptation to it. Can the sensory component of the hypersensitivity show habituation or sensitization?

Line 439, again many spaces in the sentence.

Line 448. Check typos in references.

Lines 466-468 need a reference. Again, here, are the authors not identifying tinnitus, or even hyperacusis or misophonia?

Methods

Give reasons for performing sound overexposure at 9 weeks postnatal, and explain why it was done in the morning, which corresponds to the period of highest sleep pressure for mice.

It is difficult to follow the different animals in the experimental design. Please use a table.

Please specify if the whole ventral division of MGB was targeted with the fibre optic or only the high-frequency region. Was the fibre optic present but inactive in MGB for the control experiments?

Lines 571-573: "The range of sound levels and laser powers were tailored to each mouse prior to noise exposure to ensure equivalent sampling of sound and laser perceptual growth functions." Please expand the explanation by giving further information about the differences across cases.

Lines 584-585: It is bizarre to use a scalpel to do a craniotomy instead of a dental drill.

https://doi.org/10.7554/eLife.80015.sa1

Author response

Essential revisions:

1) Please either address the concerns of reviewers 2 and 3 about the behavioral readout of hyperacusis or tone down the conclusions to preclude over-interpreting indirect measurements of perception. In particular, not all animals experiencing noise trauma will experience hyperacusis. One possible way to improve the description of the hyperacusis condition would be to show a correlation between behavioral results and physiological measurement at the single animal level.

As detailed in the point-by-point rebuttal below, we have addressed the reviewers’ concerns regarding hyperacusis and limitations of the behavior. This was accomplished by scaling back claims that were not supported by the data and explicitly detailing limitations of the current study design, such as measurements of behavior and physiology in separate mice.

2) The referees have requested a large number of clarifications. Please carefully take this into account.

The manuscript has been extensively revised based on these clarification requests as detailed below.

For clarity, the figure changes are as follows:

– In response to Reviewer 3, we have recolored Figure 1J-K to make the lines more discernible.

– In response to Reviewers’ 2 and 3 concerns about the behavior, we have provided individual mouse data. This is shown in Figure 2B and Figure 2F.

– In response to Reviewers’ 2 and 3 comments, we have provided a more representative trauma mouse in Figure 2E.

– In response to Reviewer 3’s question about the probability of cell tracking over time, we have added panel Figure 6 —figure supplement 1E, which shows the progression of cell disappearance over time.

Reviewer #1 (Recommendations for the authors):

I have no particular comment on this manuscript which, to me, is carefully written, and includes carefully executed experiments and interesting results.

We thank Reviewer 1 for their careful review and positive evaluation.

Reviewer #2 (Recommendations for the authors):

Specific comments

"As an exception, homeostatic regulation of neural activity often fails in the adult auditory system after hearing loss" – what do you mean?

We agree that this was not the clearest way to introduce the study. The first two sentences of the abstract have been rewritten such that this phrase no longer appears (Pg. 2, Lns 2-5).

"ii) a selective inability to perceptually suppress distracting sensory features,… "

I would remove this aspect as it is not really addressed in the paper and it is (in my opinion) different from tinnitus and hyperacusis: it is a more high-level phenomenon linked to informational masking.

We respect that this is a reasonable opinion. But it is also reasonable to postulate that these distinct perceptual manifestations can share some common underlying causes. For example, we have shown that difficulty perceiving auditory targets in background noise is linked to neural hypersynchrony that builds up just prior to target onset (Resnik and Polley, Neuron 2021). Since the beginning of the introduction is intentionally broad, to draw in readers from diverse backgrounds, we elected to leave this statement.

"volume of electrical activity"

The use of "volume" is awkward. Why not use "surface"? I would use another term (level, sum, integral? etc.).

Thanks. We removed this sentence.

"These changes are often described as homeostatic plasticity, but in the context of adult plasticity after hearing loss they reflect a failure in homeostatic processes that maintain neural excitability at a stable set point. Whereas homeostatic changes – by definition – restore neural activity to a baseline activity rate following a perturbation (Turrigiano, 2012), neural gain adjustments following adult-onset hearing loss often over-shoot the mark, producing catastrophic downstream consequences at the level of network excitability and sound perception (Eggermont, 2017; Noreña, 2011)."

See my point in the Public Review. I am not sure we know whether the averaged central activity is really enhanced after hearing loss. What is enhanced is the spontaneous activity and the stimulus-evoked activity for supra-threshold signals, but overall the averaged activity should not be increased. The enhanced neural activity post-hearing loss just compensates for the reduction of stimulus-evoked activity due to hearing loss.

Yes, we addressed this point in the response to the public reviews. The underlying mechanisms and common use of the term “homeostatic plasticity” describe a negative feedback process to stabilize the firing rates of individual neurons at a set point. It is true that excess activity in the cortex may (or may not) be offset by equivalent depression of neural activity in the periphery and brainstem, but this would be the product – not the driver – of homeostatic processes that work differently in different brain areas. Again, “should” is the operative word here, because measurements of the averaged activity across millions of neurons distributed across dozens of auditory brain regions in both hemispheres of the brain have never been made. Even among studies that have made simultaneous recordings from cortical and subcortical stations (e.g., Chambers..Polley et al., Neuron 2016), there was no demonstration that the averaged activity was maintained. But, as discussed in the response to the public reviews, the main point of disagreement with the reviewer is that we are describing activity regulation for individual cortical neurons – because the homeostatic mechanisms work at the level of individual cortical neurons – and make no statement about the overall activity levels of the entire auditory pathway.

"A closer inspection of the mouse psychometric detection functions for the spared low-frequency tone suggested something akin to the clinical phenomenon of loudness recruitment." I am sorry it is not clear to me.

The reviewer is correct; that wording was unclear. We removed that sentence entirely. This section of the Results now reads (on Pg. 5, Lns 137-144),

“For example, we recently reported that human subjects with normal hearing thresholds but asymmetric degeneration of the left and right auditory nerve perceive tones of fixed physical intensity as louder in the ear with poor auditory nerve integrity, particularly for low intensities near sensation level (Jahn et al., 2022). To determine whether mice may be showing evidence of auditory hypersensitivity, we performed a closer inspection of the mouse psychometric detection functions for the spared 8 kHz tone. This analysis confirmed that, following acoustic trauma, the behavioral sensitivity index (d-prime, or d’) grew more steeply for sound intensities near threshold compared both to pre-exposure baseline detection functions and sham exposed controls (Figure 2A-B).”

"Following acoustic trauma, the behavioral sensitivity index (d-prime, or d') for 8 kHz tones grew more steeply for sound intensities near thresholds compared both to pre-exposure baseline detection functions or sham exposure controls (Figure 2A-B)."

The authors should be more explicit since I don't know where to look at them. I don't understand how the sensation level (dB SL) can be negative. Does negative dB SL mean below the threshold? What are the results for high frequencies?

The detection sensitivity (d’) is plotted as a function of sensation level for example mice in 2A, which illustrates the steeper growth in the acoustic trauma mouse. The increase in sensitivity with sound intensity is taken as the slope of the function. Panel 2B shows that the slopes became steeper after acoustic trauma but not after a sham exposure. We added a new panel to figure 2B to also show the slope increase for all individual mice.
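For readers unfamiliar with the sensitivity index, d’ can be computed from hit and false-alarm rates. A minimal sketch of the standard signal-detection definition (illustrative Python; not the code used in the study):

```python
# Hedged sketch of the standard signal-detection sensitivity index:
# d' = z(hit rate) - z(false-alarm rate). Not the authors' actual code.
from statistics import NormalDist

def d_prime(hit_rate, false_alarm_rate):
    """Difference of the normal quantiles of hit and false-alarm rates."""
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(false_alarm_rate)
```

Because d’ subtracts the z-transformed false-alarm rate (estimated from catch trials), a general increase in licking raises both rates and largely cancels out, which is why the slope of d’ versus intensity is taken as a sensitivity measure.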

The detection threshold was defined with a standard procedure as the average intensity at the reversals of the 2-down, 1-up adaptive staircase, which corresponds to an intensity that will be detected on approximately 71% of trials. So, yes, that means we presented sound intensities below the level defined as threshold/0 dB SL. As such, we can plot d’ for intensities below 0 dB SL. The reviewer can see in Figure 2A that the d’ values from -10 to 0 dB SL are above zero because mice occasionally detect tones at intensities below the average of the reversals. This is explained on Pg. 18 Lns 581-589.

“Once reaction times were consistently < 1 s, mice were trained to detect 8 kHz and 32 kHz tones in a 2-down, 1-up adaptive staircasing paradigm, where two correct detections were required to decrease the range of sound intensities by 5 dB SPL and one miss was required to increase the range of sound intensities by 5 dB SPL. At each iteration of the adaptive staircasing procedure, three trials were presented: a catch (silent) trial and tones at +/- 5 dB SPL relative to the last intensity tested (Figure 1I). A single frequency was presented until 1 reversal was reached, and then the other tone was presented; a run was completed once 6 reversals had been reached for both frequencies. The first frequency presented on each daily session was randomized.”
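The 2-down, 1-up rule quoted above can be sketched as follows (a simplified, hypothetical illustration in Python, not the authors' MATLAB code: one tone per iteration, no catch trials, and a simulated listener supplied as `p_detect`). The mean of the reversal levels estimates the ~71% detection point noted in the Methods.

```python
# Hedged sketch of a 2-down, 1-up adaptive staircase with 5 dB steps.
# Simplified relative to the actual paradigm (no catch trials, one tone
# per iteration); the listener model `p_detect` is hypothetical.
import random

def run_staircase(p_detect, level=50.0, step=5.0, n_reversals=6, seed=0):
    """Track the level with a 2-down/1-up rule; return mean reversal level."""
    rng = random.Random(seed)
    consecutive_hits, last_direction = 0, None
    reversals = []
    while len(reversals) < n_reversals:
        hit = rng.random() < p_detect(level)
        if hit:
            consecutive_hits += 1
            if consecutive_hits < 2:
                continue                 # level unchanged until 2 in a row
            consecutive_hits, direction = 0, -1   # two correct -> step down
        else:
            consecutive_hits, direction = 0, +1   # one miss -> step up
        if last_direction is not None and direction != last_direction:
            reversals.append(level)      # direction flipped: a reversal
        last_direction = direction
        level += direction * step
    return sum(reversals) / len(reversals)
```

With an idealized all-or-none listener whose threshold is 40 dB, the track oscillates across the threshold and the mean of the reversals lands just below it.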

"That intensity-detection slopes were increased for the low-frequency tone at near-threshold sound levels argues against a peripheral origin for this change and instead suggests increased gain in the central auditory pathway." Why? Please provide complete argumentation for this.

We have revised the text on Pg. 6 Lns 148-154 to provide a more explicit rationale for this conclusion.

"Persons with sensorineural hearing loss commonly report loudness recruitment, a disproportionately steep growth of loudness with increasing sound level, that has been accounted for by altered basilar membrane mechanics after OHC damage (Oxenham and Bacon, 2003). However, loudness recruitment is most pronounced for frequencies within the range of hearing loss and at intensities well above sensation level (Buus and Florentine, 2002; Moore, 2004). Here, we observed steeper growth of auditory sensitivity for a spared frequency and at intensities close to sensation level, neither of which would be expected of a purely peripheral origin.”

"Importantly, testing in interleaved trials revealed behaviorally hypersensitivity to direct stimulation of auditory thalamocortical neurons after acoustic trauma (Figure 2E), demonstrating significantly increased d' across optogenetic stimulation intensities compared to sham controls (Figure 2F)". I love that experiment, testing neural sensitivity independently from the cochlea and hearing loss. However, I don't see much effects on the psychometric function in Figure 2E.

To address this point, we made three changes to Figure 2: 1) Figure 2E now shows a more representative example of the optogenetic detection functions in an acoustic trauma mouse; 2) we added arrows to Figure 2F to indicate the example sham and trauma mice from Figure 2E relative to the other mice in the sample; 3) we added a new subpanel to Figure 2F showing the d’ change across the post-exposure period for each individual mouse. In the revised Figure 2E, detection of thalamocortical stimulation remained unchanged for the sham-exposed mouse and was increased (an increase in d’) for the noise-exposed mouse following acoustic trauma. In Figure 2F, we plot the mean change in the detection function evaluated across laser intensities, and we see that after noise exposure, but not sham exposure, there is an increase in d’, the sensitivity to thalamocortical stimulation.

"A marked increase in PyrNs tuned to lesion edge frequencies could contribute to the enhanced perceptual sensitivity to 8 kHz tones identified in behavioral experiments (Figure 2)." Where does this hypothesis come from? Usually, loudness hyperacusis is not frequency-specific and is often found in subjects with small, if any, hearing loss.

Part of the confusion is related to the terms “hyperacusis” and “loudness”. As addressed in the response to the public reviews, we have removed statements that our experiments model hyperacusis, per se. Further, we removed all claims that our behavioral evidence provides direct insight into the perception of loudness. Rather than “hyperacusis” and “loudness”, the revised manuscript interprets the findings in light of auditory hypersensitivity, which seems appropriate. To the reviewer’s question, we are making the point that if the 8 kHz tone recruits activity from a larger number of excitatory neurons, and the response magnitudes of individual neurons have also increased, then these changes would plausibly relate to behavioral hypersensitivity to the stimulus. In other words, the neural SNR for the stimulus would be higher, which could amount to greater perceptual salience. This seems like a reasonable and non-controversial possibility to raise.

Could you explain how the gain is calculated? In Figure 5A left example, I see an absolute response of 10 (15-10) over 40 dB, 10/40=0.4? But the value that is indicated is 0.23. On the other hand, it seems to work on the right example: 65/40=1.62.

Sure, Pg. 22 Lns 791-795 of the Methods state,

“Gain was defined as the relationship between sound level (input) and activity rate (output). The gain was calculated as the average rate of change over a range of sound levels. The particular set of sound levels selected for gain analysis was determined according to whether the best level occurred at low, mid, or high sound levels, as illustrated in Figure 5—figure supplement 1.”

For the example functions in Figure 5A, gain was evaluated from 40 to 80 dB SPL. For the left function, this is a response change of ~9.2 (a.u.) over 40 dB SPL, which would be a gain of 9.2/40 = 0.23.
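The worked example above can be reproduced with a short sketch (hypothetical Python, not the authors' analysis code; the response values are invented to match the ~9.2 a.u. change over 40 dB described here):

```python
# Hedged sketch of the gain computation described above: average rate of
# change of the response over a chosen range of sound levels.
# The response values are hypothetical.

def response_gain(levels_db, responses, lo=40, hi=80):
    """Gain = (response at hi - response at lo) / (hi - lo), in a.u. per dB."""
    r = dict(zip(levels_db, responses))
    return (r[hi] - r[lo]) / (hi - lo)

levels = [40, 50, 60, 70, 80]            # dB SPL
resp   = [10.0, 12.0, 14.5, 17.0, 19.2]  # hypothetical activity (a.u.)
gain = response_gain(levels, resp)       # (19.2 - 10.0) / 40 dB, approx 0.23
```

The same endpoints give the right-hand example in Figure 5A: a 65 a.u. change over 40 dB yields a gain of about 1.62.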

"Classification of single trial A1 ensemble responses supported the hypothesis that cortical discrimination of sound versus silence would be enhanced for low- and mid-intensity 8 kHz tones but reduced for 32 kHz tones after trauma (Figure 5E). Enhanced neural detection of 8 kHz tones was largely driven by PyrNs in deafferented map regions, whereas the loss of cortical sensitivity to high-frequency tones was observed in both topographic zones (Figure 5F)."

I don't understand the point here, and more precisely the link between this result and the putative loudness hyperacusis. According to human data, loudness hyperacusis should be found at 8kHz and 32 kHz for supra-threshold acoustic simulation.

Part of the confusion stems from the fact that our experiments don’t attempt to model loudness hyperacusis, which, as the reviewer points out, is often observed in human subjects without hearing loss. This was not a major point we tried to emphasize in the original manuscript but the Reviewer is right to point out that there were a few instances where we drew parallels between our findings and hyperacusis. Those comparisons have been removed and, as discussed above, we only draw parallels between our findings and the more general phenomena of auditory hypersensitivity.

We think that addresses the reviewer’s confusion. The point we are trying to make is that, after high-frequency sensorineural hearing loss, low-frequency tones recruit stronger activity in the deafferented high-frequency zone of the tonotopic map. Recruiting more neurons into processing low-frequency tones would theoretically enhance neural population sensitivity to that stimulus, in line with the behavioral changes that we report. The point of Figure 5 is that a decoding analysis of cortical ensemble activity on single trials of tone presentation indeed shows enhanced cortical detection of 8 kHz tone presentations, similar to what we observed behaviorally. Across the same range of sound intensities, neural classification of high-frequency tones is reduced on account of the threshold shift. The main point is that a neural classifier trained on only a small number of cortical neurons shows a clear parallel to the behavioral finding of auditory hypersensitivity.

Also, it seems that neural activity is unchanged after noise trauma in the "intact region" when neural activity is assessed by calcium imaging, while the activity has been found to be largely enhanced in that region when neural activity is assessed with electrophysiology (MUA and LFP). That should be mentioned in the discussion. I am not a specialist in calcium imaging but the authors may explain somewhere (briefly) the relationship between calcium responses and MUA and LFP. Is there any correlation between calcium responses (the max) and MUA response and LF amplitude?

Sorry for the confusion. We reported significant changes in the low-frequency “intact” map region as well. Figure 5C shows increased neural gain shortly after noise exposure in this region. Yes, it subsides with additional time following acoustic trauma, but it is certainly prevalent, at least initially. Also, Figure 7F shows persistent increases in spontaneous activity and neural synchrony in the low-frequency area near the deafferentation boundary.

To the reviewer’s questions about how GCaMP fluorescence compares with multiunit activity and LFP, we are not aware of any direct comparison across these measurement modalities. In an earlier paper, we compared MUA, single unit, and LFP activity to a simulation of 2-photon calcium imaging data (see Figure 9 of Guo et al., J. Neurosci 2012) but it wasn’t reported in the context of hearing loss or central gain changes.

Could the authors explain how they assess spontaneous activity from calcium imaging? What is the shape of the signal that is recorded? Is this signal similar to spontaneous LFP? Spiking activity? This is important when the authors assess the synchrony between cells, to understand what is synchronized. If the activity is globally increased, we expect a "mechanical" increase in synchrony that is difficult to correct. With spiking activity, it is different (it is easier to correct, except for onset and offset responses) since the duration of a spike is very short compared to the inter-spike interval.

Figures 3D and 7A illustrate averaged evoked and single-trial spontaneous calcium transients, respectively. Imaging data were processed with Suite2p, a publicly available software package that provides a complete pipeline for processing calcium-dependent fluorescence signals collected with two-photon microscopes (Pachitariu et al., 2016; Stringer and Pachitariu, 2019) and that is widely used by neuroscience laboratories. Briefly, fluorescence data were processed in four stages:

1) Frame Registration: Brain movement artifacts are removed through a phase correlation process that estimates the XY offset values that bring all frames of the calcium video into register.

2) Detecting Regions of Interest: Suite2p then identifies candidate cellular regions of interest (ROIs) using a generative model with three key terms: 2a) a model of ROI activity, 2b) a set of spatially localized basis functions to model a neuropil signal that varies more gradually across space, and 2c) Gaussian measurement noise. Fitting this model to data involves repeatedly iterating stages of ROI detection, activity extraction, and subsequent pixel re-assignment.

3) Signal Extraction and Spike Deconvolution: Suite2p then extracts a single fluorescence signal for each ROI by modelling the uncorrected fluorescence as the sum of three terms: 1) a somatic signal due to an underlying spike train, 2) a neuropil trace scaled by an ROI-specific coefficient, and 3) Gaussian noise (Stringer and Pachitariu, 2019). The uncorrected fluorescence is first extracted by averaging all signals within each ROI. The neuropil trace is then computed as the average signal within an annular ring surrounding each ROI. These neuropil components differ from those identified during ROI detection, which implicitly use pixels inside the ROIs and are not scaled by a contamination factor. Neuropil scaling coefficients and somatic fluorescence are then simultaneously estimated using non-negative deconvolution with exponential kernels.

4) Cellular Identification: With a fluorescence trace assigned to each identified ROI, the final stage in the Suite2p pipeline involves identifying the subset of ROIs that correspond to neural somata. Suite2p utilizes a semi-automated approach by first labelling ROIs as cells or noncells based on various activity-dependent statistics, before a final manual curation step.
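The phase-correlation registration in stage 1 can be illustrated with a minimal NumPy sketch. This is a generic rigid-shift estimator written for illustration only (function names are ours, and Suite2p's actual registration is considerably more elaborate, e.g. with subpixel refinement and non-rigid blocks):

```python
import numpy as np

def phase_correlation_shift(ref, frame):
    """Estimate the (dy, dx) shift that, applied to `frame` with np.roll,
    brings it into register with `ref`, via the cross-power spectrum peak."""
    F_ref = np.fft.fft2(ref)
    F_frm = np.fft.fft2(frame)
    cross_power = F_ref * np.conj(F_frm)
    cross_power /= np.abs(cross_power) + 1e-12   # whiten: keep phase only
    corr = np.fft.ifft2(cross_power).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Shifts beyond half the frame wrap around; map them to negative offsets
    return tuple(p if p <= s // 2 else p - s for p, s in zip(peak, corr.shape))
```

For a frame that is a circularly shifted copy of the reference, the estimator returns the exact inverse offset, which is the idea behind bringing every frame of the movie into a common register.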

The reviewer’s question is really focused on the last part of Step 3, in which the relatively sluggish time-course of the calcium signal is deconvolved to produce sharper transients that are closely linked to spiking events. The kernel used for deconvolution in Suite2p is derived from experiments that performed simultaneous cell-attached recordings and GCaMP6s imaging. In essence, the deconvolution kernel is a Rosetta Stone that analytically corrects for the intrinsic sluggishness of the calcium indicator to provide something akin to a ground truth assessment of spike timing. To be fair, even these ultrasensitive indicators can be less reliable for single spike events, especially with larger imaging areas (i.e., fewer pixels per soma), but this would be a constant across groups (acoustic trauma vs sham), imaging time, and cortical location, so it is not a confounding influence.
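As a toy illustration of the deconvolution idea (not Suite2p's actual solver, which performs constrained non-negative deconvolution): if the indicator is modeled as an exponential, i.e. AR(1), kernel, transients can be recovered by inverting that filter and clipping negatives. The decay time and frame rate below are illustrative values, not parameters from the paper:

```python
import numpy as np

def naive_deconvolve(trace, tau_s=1.3, fs=30.0):
    """Toy inversion of an exponential (AR(1)) calcium kernel.
    Inverts c[t] = g * c[t-1] + s[t] and clips negative residuals;
    real pipelines solve this as a constrained optimization instead."""
    g = np.exp(-1.0 / (tau_s * fs))      # per-frame decay factor
    s = np.empty_like(trace)
    s[0] = trace[0]
    s[1:] = trace[1:] - g * trace[:-1]   # inverse AR(1) filter
    return np.clip(s, 0, None)           # enforce non-negative transients
```

Forward-filtering a sparse "spike" vector through the same exponential kernel and then applying this inversion recovers the original events exactly, which is the sense in which deconvolution undoes the indicator's sluggishness.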

Discussion

I suggest being much more cautious in the interpretation of homeostatic plasticity and behavioral data.

"Furthermore, to control for any changes in licking behavior due to acoustic trauma, our slope measurements are taken from functions using a sensitivity index (d'), which normalizes lick probability according to the false alarm rate determined from the delivery of catch (silent) trials. Thus, a steeper relationship between increasing tone intensity and perceptual sensitivity strongly suggests a hypersensitivity to sound following acoustic trauma (Figure 2)."

I am not convinced by this conclusion. At the threshold, enhanced arousal (due to stress after noise trauma) may increase d'. I am not saying stress is necessarily enhanced in exposed animals but it is just to mention that the link between psychometric function for thresholds and loudness is not as straightforward as the authors seem to believe it is.

To address the reviewer’s request, we removed the second sentence that is quoted above and replaced it with a more cautious conclusion (Pgs. 13-14, Lns 436-442):

“To control for changes in global behavioral state due to acoustic trauma, our slope measurements are taken from functions using a sensitivity index (d’), which normalizes lick probability according to the false alarm rate determined from the delivery of catch (silent) trials. Thus, overall changes in stress, arousal, or other global behavioral states following acoustic trauma that impacted overall responsivity in catch and stimulus trials would be controlled for by the d’ measurement. However, as with reaction time, the d’ growth function is not a direct measure of loudness, per se, but instead is probably best likened to the change in stimulus salience for tones of varying intensity.”
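For readers unfamiliar with the sensitivity index, the d′ quoted above is the difference of z-transformed hit and false-alarm rates. A minimal sketch using only the Python standard library (the edge-rate correction shown is one common convention, not necessarily the authors' exact procedure):

```python
from statistics import NormalDist

def d_prime(hit_rate, fa_rate, n_trials=None):
    """Sensitivity index d' = z(hit rate) - z(false-alarm rate).
    Rates of exactly 0 or 1 are nudged inward so the z-transform
    stays finite (the 1/(2N) correction when trial counts are known)."""
    eps = 1e-3 if n_trials is None else 1.0 / (2 * n_trials)
    clamp = lambda p: min(max(p, eps), 1 - eps)
    z = NormalDist().inv_cdf
    return z(clamp(hit_rate)) - z(clamp(fa_rate))
```

Because the false-alarm rate (from catch trials) enters with a negative sign, a uniform shift in responsivity moves both terms together and largely cancels, which is why d′ controls for global state changes in the way described above.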

The discussion should come back a bit more to the literature and emphasize in more detail what is consistent with the literature, what is not, what is new, etc.

We refocused the Discussion along these lines. We removed text that speculated about topics for future investigation and replaced it with text that more explicitly describes what our experiments show (or don’t show). In addition, we have added new citations on layer-dependent expression of cortical changes after acoustic trauma and behavioral indices of auditory hypersensitivity.

I suggest publishing the behavioral data elsewhere, and to focus here on the neural responses before and after the noise trauma, emphasizing clearly what is new, etc.

We appreciate that the reviewer felt that the behavioral data detracted from the more convincing aspects of the study. Overall, the reviewer’s main concern was whether this behavior was related to the loudness hyperacusis phenotype observed in clinical populations and – more specifically – whether this behavior supported strong conclusions about loudness hypersensitivity at all. As noted in the revisions above, we think these are both fair critiques and we have revised the paper to explicitly state that our study should not be interpreted as a model of hyperacusis, nor that we are directly measuring loudness.

Removing the behavioral data seemed too extreme to us. The reviewer noted that s/he “loved” the behavioral experiment that showed hypersensitivity to direct thalamocortical stimulation. Further, the analysis of the 2-photon data in Figures 4-6 is predicated in no small part on the auditory hypersensitivity behavioral finding. We feel that the impact of the manuscript would be greatly reduced by the removal of the behavioral experiments, but agree that a more accurate description of the behavior and a more cautious, even-handed interpretation of what the findings mean help to underscore what is new and significant in our work.

Reviewer #3 (Recommendations for the authors):

Specific concerns:

Introduction

The commonalities between tinnitus, hyperacusis and speech-in-noise impairment are unknown, and the associations between those and hearing loss are known, but not what happens exactly at the neural level. It seems extreme to put together normal aging with ADHD, autism, or schizophrenia, and claim that tinnitus, hyperacusis, and speech-in-noise impairment can be involved in all those conditions.

We make no such claim. The introduction only makes the point that sensory overload (as defined by phantom percepts, hypersensitivity, or heightened distractibility) is very common in the auditory modality and is observed in a wide range of neuropsychiatric disorders. There is nothing extreme or controversial about this point from our perspective. This is a general biology journal, and we feel that it is important to begin the manuscript with an introduction that would interest readers from diverse fields before delving into the narrower topic of acoustic trauma and auditory hypersensitivity.

Line 76: we don't know yet whether neural gain adjustments following adult-onset hearing loss overshoot the mark. In fact, Koops, Renken, Lanting and van Dijk claimed that cortical tonotopic map changes in humans are larger in hearing loss than in additional tinnitus. J Neurosci. 2020 Apr 15; 40(16): 3178-3185. doi: 10.1523/JNEUROSCI.2083-19.2020

The evidence is overwhelmingly clear in animal models that adult-onset sensorineural hearing loss induces excess neural gain, particularly at the level of the auditory cortex. This has been documented in dozens of studies, many of which are cited here. Further, as the reviewer notes, the Koops 2020 study in human subjects explicitly makes the point that subjects with high-frequency hearing loss exhibit increased central gain compared to normal hearing controls. This is the same point we are making and what our animal model of hearing loss most closely approximates. We are not attempting to model or study tinnitus and make no claims to this effect. That the gain elevation is not as steep in human subjects with both hearing loss and tinnitus has no real bearing on the point we are making here, as we have no indication one way or another that the mice in our study experienced tinnitus.

Please stay consistent with the use of trauma, acoustic trauma, sound overexposure, and high-frequency sound exposure throughout the manuscript.

Thanks for this suggestion. We agree that the wording was confusing on this point. We consistently use “trauma” and “sham” in the figures, where space is often tight. In the text, we use “acoustic trauma” and “sham” to refer to the sound exposure conditions. The revised manuscript has no occurrences of “sound overexposure” or “high-frequency sound exposure”.

Results

A clarification about the percentage of animals that develop loudness hypersensitivity after sound overexposure is necessary. As previously mentioned, hyperacusis and tinnitus do not occur in all animal models of sensorineural hearing loss induced by overexposure to high-frequency sounds. How do the authors distinguish between animals with sensorineural hearing loss and animals with associated tinnitus and/or hyperacusis?

As mentioned in the response to the public reviews, we appreciate the reviewers’ comments and acknowledge the limitations in relating our behavioral measurement to hyperacusis. The revised figure 2 features a new panel showing the fold change in d’ slope relative to baseline over the post-exposure period for each individual mouse. As described in the response to the public reviews, we noted a variable degree of increased sensitivity to mid-frequency tones in all mice after acoustic trauma but in only one sham-exposed mouse.

Perceptual hypersensitivity following noise-induced high-frequency sensorineural hearing loss.

Please state more precisely the differences between hyperacusis and hypersensitivity to loud sounds.

The revised text clarifies the relationship between our findings and the clinical phenomena of loudness recruitment and hyperacusis.

For loudness recruitment, we have added the following text on Pg. 6, Lns 148-154.

“Persons with sensorineural hearing loss commonly report loudness recruitment, a disproportionately steep growth of loudness with increasing sound level, that has been accounted for by altered basilar membrane mechanics after OHC damage (Oxenham and Bacon, 2003). However, loudness recruitment is most pronounced for frequencies within the range of hearing loss and at intensities well above sensation level (Buus and Florentine, 2002; Moore, 2004). Here, we observed steeper growth of auditory sensitivity for a spared frequency and at intensities close to sensation level, neither of which would be expected of a purely peripheral origin.”

For hyperacusis and behavioral measures of loudness, we made text changes throughout the manuscript as well as revising the following paragraph, Pg. 14, Lns 448-466.

“While the findings presented here support an association between sensorineural peripheral injury, excess cortical gain, and behavioral hypersensitivity, they should not be interpreted as providing strong evidence for these factors in clinical conditions such as tinnitus or hyperacusis. Our data have nothing to say about tinnitus one way or the other, simply because we never studied a behavior that would indicate phantom sound perception. If anything, one might expect that mice experiencing a chronic phantom sound corresponding in frequency to the region of steeply sloping hearing loss would instead exhibit an increase in false alarms on high-frequency detection blocks after acoustic trauma, but this was not something we observed. Hyperacusis describes a spectrum of aversive auditory qualities including increased perceived loudness of moderate intensity sounds, a decrease in loudness tolerance, discomfort, pain, and even fear of sounds (Pienkowski et al., 2014a). The affective components of hyperacusis are more challenging to index in animals, particularly using head-fixed behaviors, though progress is being made with active avoidance paradigms in freely moving animals (Manohar et al., 2017). Our noise-induced high-frequency sensorineural hearing loss and Go-NoGo operant detection behavior were not designed to model hyperacusis. Hearing loss is not strongly associated with hyperacusis, where many individuals have normal hearing or have a pattern of mild hearing loss that does not correspond to the frequency dependence of their auditory sensitivity (Sheldrake et al., 2015). While the excess central gain and behavioral hypersensitivity we describe here may be related to the sensory component of hyperacusis, this connection is tentative because it was elicited by acoustic trauma and because the detection behavior provides a measure of stimulus salience, but not the perceptual quality of loudness, per se.”

Explain why only wave 1 was used to explore the auditory thresholds in ABRs when wave 5, for example, would be easy to identify.

A vertical electrode montage (pinna-vertex) is more sensitive to the electrical dipoles generated by central auditory structures downstream of the cochlear nucleus, which are the primary contributors to waves 3-5. Instead, because the surgical preparation prevents electrode placement on top of the head, we use a horizontal (pinna-pinna) electrode montage, which is sensitive to the dipole generated by synchronized activity of the spiral ganglion neurons (wave 1) and cochlear nucleus (wave 2); waves 3-5 are not easily discernible. This has been explained in many of our previous publications (e.g., Chambers et al., Neuron 2016) and also in seminal reports (Galbraith et al., J Neurosci Methods 2006). Perhaps most importantly, later ABR waves can reflect central gain changes, while the main point of Figure 1 is to provide a comprehensive characterization of the peripheral sensorineural damage at the level of stereocilia damage, hair cell death, primary afferent synapse degeneration, otoacoustic emissions, and the peripheral neural response. Wave 1 is the most suitable component of the ABR for this purpose.

We revised the Methods to clarify this point on Pg. 20 Lns 689-691,

“To highlight the peripheral neural contribution to the ABR, subdermal electrodes were positioned in a horizontal (pinna-to-pinna) montage (Galbraith et al., 2006; Melcher et al., 1996).”

It is not clear how far the animals were tested and the survival time of the animals between the hearing loss triggering and the histology showing loss of synaptic contacts in IHCs in the base of the cochlea and death of OHC in the base with stereocilia changes in base and mid coils. In the different figures, there are different days and groups of days depending on the figure. Please clarify.

The preparation of cochlear materials for the histopathology analysis was performed 21 days after exposure. This point has been clarified by the addition of Supplementary File 1, which provides the timing of each procedure across all mice and through changes to the text on Pg. 5, Lns 113-114:

“Post-mortem cochlear histopathology performed 21 days after noise exposure suggested an anatomical substrate for cochlear function changes due to acoustic trauma.”

Figure 1 panels J, K. Change the colours to make them more different. Why is there a threshold elevation at 8 kHz in the first two days?

We adjusted the colors in Figure 1J-K so that they are more easily discriminable. The temporary threshold elevation at 8 kHz in the first 24 hours after acoustic trauma reflects TTS (temporary threshold shift). TTS is typically observed over a wider frequency range than permanent threshold shift.

Behavioural hypersensitivity to cochlear lesion edge frequencies and direct auditory thalamocortical activation after SNHL.

Would it be possible that the differential distribution of damage between the IHCs and OHCs (only base versus base and middle coil) could explain the changes in ABRs, DPOAE and behavioural thresholds?

Yes, certainly. Behavioral thresholds (Figure 1J-K) exhibited lasting elevation at 32 kHz but not 8 kHz. That is easily explained by the observation of OHC and synaptic degeneration at the 32 kHz region of the cochlea but not the 8 kHz region. ABR and DPOAE threshold elevation is <10 dB up to 16 kHz but then increases to 30-40 dB above 16 kHz. ABR and DPOAE thresholds are less sensitive to afferent lesions (IHC loss or synaptopathy) and mostly reflect OHC damage. OHC elimination was observed at the extreme base (Figure 1E), which can account for some of the ABR and DPOAE threshold elevation, but the changes in the 22-45 kHz range arise from more subtle damage to OHCs. Some OHC stereocilia damage at lower regions of the cochlear frequency map can be observed at the light microscopic level (Figure 1G), but resolving the physical substrate of the ABR and DPOAE threshold elevation at lower frequencies requires techniques that go below the diffraction limit (e.g., electron microscopy).

Please clarify why, in the combined optogenetic (ChR2) and auditory stimulation, only a high-frequency tone centred at 32 kHz was used. Is the optogenetic stimulation affecting all frequency ranges in vMGN or only high frequencies?

Trials of thalamocortical stimulation were interleaved with high-frequency acoustic stimulation as a positive control; we wanted to show that enhanced sensitivity to central auditory stimulation was accompanied by decreased sensitivity to sound frequencies corresponding to the high frequency cochlear damage. Further, by keeping the 32kHz trials at 50% in both versions of the behavioral detection task (where the other 50% were more easily detected low-frequency tones or optogenetic stimulation), we maintained the overall rates of miss trials and reward between the two variants of the behavioral task.

We have no insight into the frequency tuning of the virally transduced MGB neurons. There is no reason to believe our injections targeted MGB neurons tuned to any particular range of sound frequencies.

In Figure 2, panel E it seems that the psychometric curve in the baseline condition for the hearing loss animals is shifted to the right compared with the baseline in the sham condition. It would be easier to compare them if they are placed in the same panel.

Thanks, the reviewer’s critique helped us improve this figure. We selected a more representative trauma mouse for Figure 2E and revised Figure 2F to indicate which lines correspond to the trauma and sham exemplars shown at left. For the example mice, it was important to show the changes within a mouse that occur (or fail to occur) after noise exposure relative to baseline testing. The quantification of the optogenetic behavior is also based on changes within an animal in the post-exposure period relative to baseline because the particular growth functions vary from mouse to mouse based on fiber placement, transduced neurons, and the optical interface between the fiber and the transduced neurons. For this reason, we determined the most informative contrast was to include data from different behavioral sessions within a mouse rather than a single session for different mice.

Chronic imaging in A1 reveals tonotopic remapping and dynamic spatiotemporal adjustments in neural response gain after acoustic trauma

The authors mentioned that tonotopic remapping was limited to the deafferented zone where the proportion of layer 2/3 pyramidal neurons preferentially tuned to lesion edge frequencies is more than doubled following acoustic trauma without any systematic change in response. Please specify the lack of systematic change.

Pg. 7 Lns 200-203 state,

“Analysis of all tone-responsive PyrNs (n = 1,749 in 4 trauma mice; n = 1,748 in 4 sham mice) demonstrated that tonotopic remapping was limited to the deafferented zone (Figure 3G), where the percentage of L2/3 PyrNs preferentially tuned to lesion edge frequencies more than doubled following acoustic trauma (Figure 3H) without any systematic change in response threshold (Figure 3I).”

We were specifically referring to the minimum response threshold. This analysis was inspired by Dexter Irvine’s publications on cortical plasticity after hearing loss, which showed that tonotopic remapping versus residual tuning can be distinguished by whether the preferred frequency changes without an elevation in response threshold.

The identification of the frequency edge is convincing in the widefield but less convincing at the neural level. How the deafferentation boundary was established? Please specific the rational for the use of those four specific frequencies (32, 5.7, 8, 11.3)

On lines 783-787, we state,

“To delineate the intact and deafferented regions of the imaging field, a support vector machine (SVM) was calculated for each mouse using BFs determined from the first day of imaging prior to noise exposure. The SVM deafferentation boundary categorized physical space (intact: BF < 16 kHz, deafferented: BF ≥ 16 kHz) and its physical location was then imposed on all successive imaging sessions after alignment of fields-of-view from all imaging sessions.”
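As a rough sketch of the boundary-fitting logic quoted above, a simple perceptron stands in below for the SVM; the coordinates, labels, and function names are hypothetical illustrations, not the authors' code:

```python
import numpy as np

def fit_linear_boundary(xy, bf_khz, lr=0.1, epochs=500):
    """Fit a linear 'deafferentation boundary' over map coordinates.
    Labels follow the rule quoted above (intact: BF < 16 kHz,
    deafferented: BF >= 16 kHz); a perceptron replaces the SVM here."""
    y = np.where(np.asarray(bf_khz) >= 16, 1.0, -1.0)
    X = np.hstack([np.asarray(xy, float), np.ones((len(y), 1))])  # bias column
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        errors = 0
        for xi, yi in zip(X, y):
            if yi * (xi @ w) <= 0:     # misclassified (or on the boundary)
                w += lr * yi * xi
                errors += 1
        if errors == 0:                # converged on separable data
            break
    return w

def classify(xy, w):
    """Assign map locations to either side of the fitted boundary."""
    X = np.hstack([np.asarray(xy, float), np.ones((len(xy), 1))])
    return np.where(X @ w >= 0, "deafferented", "intact")
```

Once fitted on the baseline session, the same weight vector can be applied to every aligned later session, which mirrors how the boundary's physical location is "imposed" on all subsequent imaging days.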

The four frequencies were selected such that we had one frequency corresponding to the region of sensorineural hearing loss (32k), one frequency far from the cochlear damage (5.7kHz), one edge frequency corresponding to the behavioral experiments (8kHz), and one edge frequency bordering the low-frequency edge of ABR and DPOAE threshold elevation (11.3 kHz).

This is stated on Pg. 7, Lns 214-217:

“Next, we expanded the neural gain analysis in sham and trauma mice to four stimulus frequencies: a high-frequency tone aligned to the damaged cochlear region (32 kHz), a spared low-frequency tone far from the cochlear lesion (5.7 kHz), and two spared mid-frequency tones near the edge of the cochlear lesion (8 and 11.3 kHz).”

Cortical hyperresponsivity and increased gain mirrors behavioural hypersensitivity to spared low-frequency inputs.

This section is not particularly convincing and figure 5 is difficult to follow. For example, in panel D it is impossible to distinguish the two different trial types, sound or silence. Only panel F is clear and easy to follow.

One of the main contributions in this study is that it provides the means to study many key variables related to cortical plasticity after hearing loss that have traditionally only been studied in isolation. For example, 1) the cell’s tonotopic map location relative to the zone of cochlear damage is known to be an important determinant of plasticity, 2) as is the length of time separating the measurement and the cochlear injury, as well as 3) the frequency of the stimulus used to elicit a sensory response relative to the region of cochlear damage. Of course, each of these features depends on 4) whether the noise exposure caused sensorineural cochlear damage or was innocuous.

Here, we measure each of these dimensions in single neurons and document a 4-way interaction; in other words, the degree of central gain depends on all four factors. For example, gain changes relative to sham controls are weak for low-frequency tones in the low-frequency portion of the map more than a few days after acoustic trauma. Conversely, excess gain relative to sham controls is greatest for lesion edge frequencies (11.3 kHz) in the deafferented region of the tonotopic map several days after acoustic trauma. Dropping any of these dimensions diminishes the message of the project, hence all must be shown. As such, a certain degree of complexity is unavoidable in these figures, as there are four independent variables. We can think of no more convincing and simple way of plotting the gain changes across these four variables than the line plots in Figure 5A-C. However, we tried to address the reviewer’s comment by making text changes that seek to simplify an inherently complex result (Pg. 8, Lns 220-226):

“This analysis identified several clear results: i) a strong initial uptick in neural gain measured in both topographic regions following trauma; ii) persistent (lasting greater than 1 week) increases in neural gain were observed only for spared mid-frequency tones in deafferented cortical regions; iii) no significant changes in neural gain were observed in sham-exposed mice (Figure 5C). Thus, excess central gain reflected the interaction of four factors: 1) whether the initial sound exposure induced SNHL, 2) where within the cortical frequency map the cell is located, 3) when relative to exposure the measurement is made, and 4) the proximity of the stimulus test frequency to the cochlear lesion.”

For Figure 5D-F, we relate neural response gain to the behavioral result. This was accomplished by constructing a linear SVM decoder where the input is the population activity in auditory cortex and the decoder is trained to distinguish between trials of sound and silence. Before applying the SVM decoder, we first reduce the dimensionality of the cortical ensemble response using PCA. Figure 5D provides a visualization of the SVM decoding by showing the first 2 principal components for single trials of silence and sound. The reviewer commented that it is impossible to distinguish the sound and silence trials in some conditions (e.g., the first few columns), and this is exactly the point. The cortical decoding model struggles to detect low-intensity tones, and the overlap in the data is intended to convey that point. Conversely, at high intensities, particularly after acoustic trauma, the blue and gray data points are well separated. The reviewer mentions that panel F is difficult to follow, though s/he did not mention why. It simply shows the change in cortical detection for low- and high-frequency tones before and after acoustic trauma. Detection accuracy improves for low-frequency tones and becomes worse for high-frequency tones, paralleling the behavioral results. We tried to address the reviewer’s critique but determined that the current organization of Figure 5A-C is optimal to convey the 4-way interaction between exposure type, map location, stimulus frequency, and post-exposure day. Further, Figure 5D-F is the optimal way to visualize and convey the findings from the single-trial cortical ensemble decoding analysis.
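The decoding pipeline described here (dimensionality reduction followed by linear classification of sound vs. silence trials) can be sketched schematically. The version below substitutes a leave-one-out nearest-centroid rule for the linear SVM and runs on simulated data, so it illustrates the logic rather than reproducing the analysis:

```python
import numpy as np

def pca_project(trials, n_pcs=2):
    """Project trials (n_trials x n_neurons) onto the top principal components."""
    centered = trials - trials.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return centered @ vt[:n_pcs].T

def decode_accuracy(sound, silence, n_pcs=2):
    """Leave-one-out nearest-centroid decoding of sound vs silence
    in PC space (a stand-in for the linear SVM used in the paper)."""
    X = np.vstack([sound, silence])
    y = np.array([1] * len(sound) + [0] * len(silence))
    Z = pca_project(X, n_pcs)
    correct = 0
    for i in range(len(Z)):
        mask = np.arange(len(Z)) != i          # hold out trial i
        c1 = Z[mask & (y == 1)].mean(axis=0)   # sound-trial centroid
        c0 = Z[mask & (y == 0)].mean(axis=0)   # silence-trial centroid
        pred = 1 if np.linalg.norm(Z[i] - c1) < np.linalg.norm(Z[i] - c0) else 0
        correct += pred == y[i]
    return correct / len(Z)
```

Run separately for each tone frequency and intensity, a measure like this yields the detection-accuracy growth functions whose overlap (or separation) in PC space panel D visualizes.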

Tracking changes in activity and local synchrony in individual PyrNs over several weeks.

Explain the differences across neurons in the probability of being tracked for just one day versus all 15 sessions. Please explore the possibility that the baseline properties of the neurons impact their trackability in chronic imaging.

Like extracellular recordings, 2-photon calcium imaging is biased towards excluding inactive or very sparsely active neurons from the analysis. Cells are “lost” from the chronic tracking for two reasons: 1) they become virtually quiescent and therefore are “invisible” to the regions of interest (ROI) identification process; 2) the geometry of the cell (i.e., the somatic ROI shape) or the geometric separation from other cells change to a degree that we cannot be confident it is the same cell.

We implemented a relatively rigorous process for establishing ROI tracking confidence that is described in Figure 6 —figure supplement 1 and related text. We observed a constant turnover of ROIs (new ROIs gained, prior ROIs lost) that likely reflects the inherent difficulty of returning to exactly the same XYZ coordinates over time. In addition, we noted a large turnover in lost/added ROIs only after acoustic trauma, only during the 48 hours after exposure, and only near the deafferentation boundary in the cortical map. The two most likely explanations for the large uptick in cell turnover are 1) some active neurons become virtually quiescent and vice versa, and/or 2) the topology of the cortex changes in newly deafferented areas, disrupting our ability to track ROIs based on their appearance and relation to other ROIs. To this latter point, prior studies have documented a degradation of extracellular matrix proteins (e.g., doi: 10.1016/j.heares.2017.04.015) and an uptick in neurite motility (doi: 10.1016/j.neuron.2011.12.038) that accompany sudden hearing loss. In theory, this could destabilize the physical geometry of pyramidal neuron ensembles near the deafferentation boundary in the first few days after acoustic trauma.

We revised Pg. 9, Lns 257-267 to convey these points with greater clarity,

“Interestingly, we noted that new active PyrNs appeared immediately following noise exposure, while PyrNs tracked throughout the baseline imaging sessions disappeared (Figure 6 —figure supplement 1C). Because the degree of turnover was far less after sham exposure, because the appearance or disappearance of PyrNs after acoustic trauma was concentrated near the deafferentation boundary (Figure 6 —figure supplement 1D), and because approximately 75% of PyrN disappearance occurred within the 48 hours after acoustic trauma (Figure 6 —figure supplement 1E), we concluded that increased PyrN turnover in the imaging field must be related to cortical changes arising indirectly from the cochlear SNHL. Possibilities include large and heterogeneous state shifts in activity (from virtually quiescent to active or vice versa) as well as changes in the physical topology of the cortex due to degradation of extracellular matrix proteins and increased neurite motility after sudden hearing loss (Nguyen et al., 2017; Tschida and Mooney, 2012).”

To the reviewer’s comment about baseline properties, we would refer them to the multivariate model described in Figure 8. Here, we use the diversity of baseline properties to account for diverse functional outcomes after hearing loss. This analysis ensures that our imaging approach includes cells with wide-ranging response properties at baseline and can return to them several days later. So, apart from near-complete quiescence, we cannot identify a scenario in which the baseline properties of a neuron impact its trackability for chronic imaging, though they certainly influence the response to hearing loss, as detailed in Figure 8.

The authors mention that following noise overexposure active pyramidal neurons disappeared from the deafferented area, with a permanent increase in response to 8 kHz tones in the deafferented area but not in the low-frequency area. This increase was first for all intensity levels and later only above 50 dB SPL. When do these changes happen? After 3 days? When is the beginning of the 2nd stage? Could the authors distinguish between an acute and a chronic phase?

Taking the reviewer’s questions one at a time, changes in the response to an 8 kHz tone across post-exposure days are provided in Figure 6D. The short-term gain increase in low-frequency map areas is strongest in the hours after the trauma (D0) and is gone by day 3. Likewise, as now shown in Figure 6 —figure supplement 1E, about 70% of the cells that were ‘lost’ disappeared within the first 48 hours after trauma. By contrast, among the relatively small number of PyrNs that were lost in the sham exposure group, only about 20% were lost within the first 48 hours after the innocuous noise exposure.

Taken together, yes, the reviewer is correct in observing a period of acute changes in tone-responsiveness following noise exposure that are followed by stable, sustained hyperresponsivity, as shown in Figures 4-6. These results highlight days 0-2 as acute and highly dynamic, while days 3+ show far more stable changes in auditory cortex activity. This timescale is also reflected in the appearance and disappearance of active pyramidal neurons, which are most prominent on days 0-2 following noise exposure.

We revised the text on Pg. 10, Lns 304-309 to make this clearer to the reader:

“Taken together, these observations also suggest that plasticity following acoustic trauma is organized into two phases: a dynamic phase during the first 48-72 hours after noise exposure involving topographically widespread hyper-correlation, hyper-responsivity to mid-frequency sounds in the intact map regions, and large-scale turnover in PyrN stability around the deafferentation map boundary, followed by a stable phase beginning 3 days after exposure where gain is increased for spared tone frequencies in deafferented map regions.”

Predicting the degree of excess central gain after trauma in individual PyrNs based on their pre-exposure response features

Please expand your explanation about potential heterogeneity. I am not certain whether the authors talk about two different groups of pyramidal neurons or a gradient between low to high spontaneous activity at baseline.

In Figure 8D, we show that baseline spontaneous activity relates to post-exposure changes in response gain along a continuous gradient. In our linear model, most predictor variables are continuous, as explained in the Figure 8 legend and the Methods, and so we expect a continuous relationship between each of these variables and post-exposure gain. Given the results of the linear model, and to offer a reasonable and readable interpretation, in Figure 8G we provide a graphical summary of the model results. Here, we highlight two ‘classes’ of neurons (those with stable gain and those with increased gain), but we also include graphics indicating a gradient of influence for each predictor variable, as this is how linear models treat continuous variables.

Again here the temporal limit is different and they use 3.5 days. Please explain the rationale of that and use always the same time points with justification.

I am curious to know what will be the result of those functions about central gain if other frequencies were used.

We used 3-5 days post-exposure because the initial phase of acute reorganization and rapid neural dropout has ended and the expression of plasticity is equivalent to what is observed at longer post-exposure observation periods. Thus, by 3-5 days after trauma, the reorganization has stabilized while affording us the greatest number of chronically tracked PyrNs to include in our model (which helps to avoid overfitting).

Multivariate linear regression is limited when it comes to including separate imaging sessions in the model. There are more complex modeling approaches that can account for multiple timescales (e.g., tensor decomposition) but we were not in a position to attempt those modeling approaches, so we determined that selecting a single post-exposure period of 3-5 days was the most logical and informative way to go. The reviewer makes a good point that the rationale for this decision was not made clear and we have revised the text on Pgs. 10-11 Lns 327-329 to address this point:

“To avoid overfitting the regression model, we selected a single timepoint – 3-5 days after noise exposure – that corresponded to the stable phase of reorganization after acoustic trauma, while maximizing our sample size of chronically tracked PyrNs.”

As for modeling the changes in the same neurons at other stimulus frequencies, our sample size limited how many terms we could add to the model without overfitting and without running into problems with multiple comparisons from the same sample. For these reasons, we limited our model to a single frequency and determined that 8 kHz was the best choice, as it was the frequency used for behavioral testing (Figure 2) as well as the neural classifier (Figure 5).
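For readers who want a concrete picture of this class of analysis, the model can be sketched in a few lines. This sketch is ours, not the authors’ code: the predictor names and simulated values below are hypothetical stand-ins for the baseline response features used in Figure 8.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200  # hypothetical sample of chronically tracked PyrNs

# Baseline predictors measured before exposure (illustrative names only)
spont_rate = rng.gamma(2.0, 1.0, n)    # spontaneous activity rate
monotonicity = rng.uniform(0, 1, n)    # intensity-growth monotonicity index
map_distance = rng.uniform(0, 1, n)    # position re: deafferentation boundary

X = np.column_stack([np.ones(n), spont_rate, monotonicity, map_distance])

# Simulated outcome: change in response gain during the 3-5 day stable phase
true_beta = np.array([0.1, 0.4, 0.8, -0.5])
gain_change = X @ true_beta + rng.normal(0, 0.8, n)

# Fit the multivariate linear regression by ordinary least squares
beta_hat, *_ = np.linalg.lstsq(X, gain_change, rcond=None)

# Fraction of variance explained, the quantity reported as ~40% in the paper
resid = gain_change - X @ beta_hat
r2 = 1 - resid.var() / gain_change.var()
print(beta_hat.round(2), round(r2, 2))
```

Because every predictor enters the model as a continuous variable, the fitted coefficients describe gradients of influence rather than discrete cell classes, which is the point made about Figure 8G above.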

Discussion

Please state the differences between the hearing loss in this animal model of acute sound overexposure and the majority of ageing sensorineural hearing loss (presbyacusis), where the changes are more gradual.

Excess central gain is the hypothesized neural substrate for loudness recruitment and generalised hyperacusis, but can the authors think about other potential causes?

Both presbycusis and noise-induced hearing loss feature afferent synapse loss, high-frequency threshold elevation, and high-frequency outer hair cell stereocilia pathology. Animal models for age-related hearing loss generally identify additional breakdown in the tight cellular junctions and ion transporter proteins that power the endocochlear potential that provides additional low-frequency threshold shift that is not typically found in acoustic trauma.

In terms of the reviewer’s questions, yes, the revised text now mentions that loudness recruitment can reflect altered basilar membrane mechanics on Pg. 6, Lns 148-150: “Persons with sensorineural hearing loss commonly report loudness recruitment, a disproportionately steep growth of loudness with increasing sound level, that has been accounted for by altered basilar membrane mechanics after OHC damage (Oxenham and Bacon, 2003).”

Other underlying contributions to excess central gain and associated auditory perceptual hypersensitivity disorders are discussed on Pgs. 3-4, Lns 58-89 and Pgs. 12-13, Lns 368-426.

Line 353. Typo long spaces in the same line.

Fixed.

Why and what for pyramidal neurons in layers 2-3 with low spontaneous activity and greater responses to low-intensity tones show more stable gain control after trauma? I do not understand what "stochastic variations in inhibitory tone between different microcircuit milieus" means. Is it about the different types and functions of inhibitory interneurons in the cortex? Can neuromodulators such as ACh play a role too?

The design of our study limits our insight into questions related to “why and what for”. However, the Discussion speculates on this point that L2/3 pyramidal neurons embedded into microcircuits that impose stronger constitutive inhibition would be associated with more non-monotonic intensity growth functions and lower spontaneous firing rates. Conversely, microcircuit milieus that impose weaker feedforward inhibition onto pyramidal neurons might be reflected in higher spontaneous rates and more linear intensity response growth functions.

This point is made on Pgs. 12-13, Lns 400-405.

“In particular, PyrNs with low spontaneous activity rates and non-monotonic encoding of sound intensity – features associated with stronger intracortical inhibition (Tan et al., 2007; Wu et al., 2006) – showed more stable gain control after trauma. Conversely, initially high spontaneous activity, hyper-correlation, and gradual monotonic intensity growth functions could be reflective of weaker intracortical inhibitory tone and were features that predicted non-homeostatic excess gain after acoustic trauma.”

Can the different types of neurons be distinguished? Again, what happens when the animals only have hearing loss without tinnitus or hyperacusis? Could the accumulation of one or more symptoms after hearing loss depend on the proportion of active cells at baseline?

These are all interesting questions that cannot be directly addressed by our experiments. We speculated that baseline response features associated with non-homeostatic excess gain (high spontaneous firing rate and gradual monotonic growth of response to 8 kHz tones) might be reflective of weak local inhibitory tone, which in turn would be predictive of poorly regulated gain after acoustic trauma. To the reviewer’s question, other than the certainty of knowing that they are L2/3 excitatory pyramidal neurons, we don’t have any evidence one way or the other to say how the pyramidal neurons that show excess gain versus stable gain are intrinsically different (i.e., different morphology, biophysics, genetics, etc.). We do have ample evidence that we can account for 40% of the variance in how they respond to acoustic trauma based on where the neurons are located in the tonotopic map and other functional characteristics that are presumably regulated by local and random (i.e., stochastic) variations in local circuit motifs.

The reviewer’s questions about types of inhibitory interneurons, a possible role of neuromodulators, and differences in animals with hearing loss that either do (or do not) have behavioral evidence of tinnitus and hyperacusis are interesting ideas for casual discussion but are not directly related to the experiments that we performed, so we opted not to address these points through text revision.

The authors mention the startle reflex as an indicator of hypersensitivity but usually, animals show a high degree of adaptation to it. Can the sensory component of the hypersensitivity show habituation or sensitization?

From our behavioral measurements, we see stable mouse performance over time in both sound and optogenetic detection, both before and after noise/sham exposure. Given this observation and the consistent trend within groups, we do not see evidence of habituation or sensitization in our measurements. This consistency may come from the difficulty of our tasks. In both procedures, we use stimulus intensities where the mice are operating near their perceptual threshold, which allows the mice to stay engaged in the task for long periods of time. This point is illustrated by the example adaptive track shown in Figure 1I, the consistent behavioral thresholds over days shown in Figure 1K, and the stable gain elevation observed after acoustic trauma in Figures 2B and 2F.
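The adaptive tracking mentioned here (e.g., the example track in Figure 1I) can be illustrated with a toy simulation. This is our sketch of one common reading of a 2-up, 1-down staircase for a detection task, using a hypothetical observer with a made-up psychometric function; it is not the authors’ code.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical observer: detection probability falls as attenuation (dB) grows
def p_detect(level_db):
    return 1.0 / (1.0 + np.exp((level_db - 20.0) / 4.0))  # 50% point at 20 dB

# Staircase rule: after two consecutive hits the stimulus gets harder
# (more attenuation); after one miss it gets easier. This rule makes the
# track hover near ~70.7% correct.
level, step = 0.0, 2.0
consecutive_hits = 0
track = []
for _ in range(400):
    track.append(level)
    if rng.random() < p_detect(level):
        consecutive_hits += 1
        if consecutive_hits == 2:
            level += step       # harder
            consecutive_hits = 0
    else:
        level -= step           # easier
        consecutive_hits = 0
    level = max(level, 0.0)     # cannot attenuate below the max level

# Threshold estimate: average level over the second half of the track
threshold = float(np.mean(track[200:]))
print(round(threshold, 1))
```

Because the track oscillates around the observer’s ~71% point rather than drifting, a stable mean over sessions is exactly what the stable thresholds in Figure 1K would look like under this procedure.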

Line 439, again many spaces in the sentence.

Fixed.

Line 448. Check typos in references.

Fixed.

Lines 466-468 need a reference. Again here, the authors are not identifying tinnitus or even hyperacusis or misophonia?

We have added references to this sentence (Pg. 15, Lns 492-493).

Methods

Give reasons for the time of sound overexposure (9 weeks postnatal) and why it was done in the morning, which corresponds with the period of higher sleep pressure for the mice.

We wanted to model noise-induced high-frequency sensorineural hearing loss in adults. The peripheral and central auditory system is mature by 9 weeks, which is why we chose that age. Noise-induced cochlear injury is regulated by the circadian clock. Our main intent was to make the exposure time consistent across mice, but we noted prior work showing that the temporary component of the threshold shift was less extreme and less variable for noise exposures occurring during the day, which suggested that morning was a good choice for the exposure time (see Meltser et al., Current Biology 2014, http://dx.doi.org/10.1016/j.cub.2014.01.047).

To address the reviewer’s comment, we have revised Lns. 526-528 of the Methods to read:

“Noise exposure occurred at 9 weeks postnatal and was timed to occur in the morning, when the temporary component of the threshold shift is less extreme and variable (Meltser et al., 2014).”

It is difficult to follow the different animals in the experimental design. Please use a table

To address the reviewer’s request, we added Supplementary File 1, which provides timing for all experimental animals, now referenced on Pg. 16, Ln 539.

Please specify if the whole ventral division of MGB was targeted with the fibre optic or only the high-frequency region. Was the fibre optic present but inactive in MGB for the control experiments?

There is no way to restrict the cone of light or the virus expression only to the high frequency BF/CF region of the MGB tonotopic map using the approach described in the Methods. The control group shown in Figure 2 is a sham-exposed control. The treatment of these mice differs only in the sound pressure level of the noise exposure. The preparation for the optogenetic experiment is the same including the virus injection and fiber implant, as evidenced by Figure 2E, which shows the psychometric detection function for direct thalamocortical activation in a control mouse.

Line 571-3. The range of sound levels and laser powers were tailored to each mouse prior to noise exposure to ensure equivalent sampling of sound and laser perceptual growth functions. Please expand the explanation by giving further information about differences across cases.

To study changes in the perceptual salience of near-threshold sounds, we used adaptive staircasing (i.e., Method of Limits) to present sounds just above and below threshold (Figure 2A-B). For the combined optogenetic and acoustic task, we wanted to show changes in performance for a fixed set of stimuli before and after acoustic trauma or sham exposure (i.e., Method of Constant Stimuli). Subtle differences in the amount of opsin expression, the position of the fiber relative to the transduced cells, and the optical properties of the fiber-tissue interface required that we find the range of laser powers and 32 kHz sound levels that spanned the dynamic range of the mouse, from undetectable to detectable with near certainty. Once we had found this range for each mouse, we then used these same five laser levels and sound levels for all subsequent testing sessions after sound exposure.

We thank the reviewer for pointing out that these details were missing from the Methods. We have revised the text on Pgs. 18-19, Lns. 614-625 to read:

“For testing, randomized interleaved blocks of either noise or laser stimulation were presented at a fixed range of levels. The range of sound levels and laser powers were individually tailored prior to noise/sham exposure to ensure equivalent sampling of sound and laser perceptual growth functions, and then these fixed values were used for all post-exposure testing sessions. Tailoring was accomplished by first identifying the lowest laser power and 32kHz sound level that produced at least 95% hit rates (operationally defined as “max”). These sound levels and laser powers were then presented alongside four attenuated levels relative to the maximum as well as no-stimulus catch trials in each mouse on every session. Psychometric functions were fit using binary logistic regression, and threshold was defined as the point where detection crossed 71% correct, which is the closest approximation to the threshold point identified with the 2-up, 1-down staircasing procedure described above. Runs were rejected for further analysis if the false alarm rate of the mouse was above 30%, and again this resulted in the exclusion of <5% of sessions.”
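As a rough illustration of the threshold definition in this passage (fit a logistic to detection data, read off the 71% crossing), the following sketch uses made-up hit counts and a simple gradient-ascent fit of the binomial log-likelihood as a dependency-free stand-in for whatever regression routine the authors used.

```python
import numpy as np

# Hypothetical detection data (not the paper's): five fixed attenuation
# levels (dB re: the "max" level) and hits out of 40 trials at each level
levels = np.array([0.0, 10.0, 20.0, 30.0, 40.0])
hits = np.array([39, 37, 30, 18, 6])
n_trials = 40

# Standardize levels for a numerically stable fit
x = (levels - levels.mean()) / levels.std()

# Fit p(x) = 1 / (1 + exp(-(b0 + b1*x))) by gradient ascent on the
# binomial log-likelihood
b0 = b1 = 0.0
lr = 0.01
for _ in range(20000):
    p = 1.0 / (1.0 + np.exp(-(b0 + b1 * x)))
    b0 += lr * np.sum(hits - n_trials * p)
    b1 += lr * np.sum((hits - n_trials * p) * x)

# Threshold: the level where detection crosses 71% correct, the closest
# approximation to the staircase convergence point described above
p_target = 0.71
x_thr = (np.log(p_target / (1 - p_target)) - b0) / b1
threshold = x_thr * levels.std() + levels.mean()
print(round(float(threshold), 1))
```

The 71% criterion is chosen so that thresholds from the Method of Constant Stimuli sessions remain comparable to those from the 2-up, 1-down adaptive tracks, which converge near 70.7% correct.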

Line 584-585. It is bizarre using a scalpel to do a craniotomy instead of a dental drill.

The mouse skull is very thin. Drilling can introduce vibration and bruising. We have been performing mouse craniotomies with a scalpel for nearly 15 years, as described in ~40 publications from our lab. In our hands, neural responses are superior to what we get with a dental drill, so it does not seem bizarre to us.

https://doi.org/10.7554/eLife.80015.sa2

Article and author information

Author details

  1. Matthew McGill

    1. Eaton-Peabody Laboratories, Massachusetts Eye and Ear Infirmary, Boston, United States
    2. Division of Medical Sciences, Harvard Medical School, Boston, United States
    Contribution
    Conceptualization, Data curation, Formal analysis, Funding acquisition, Methodology, Writing - original draft, Writing – review and editing
    For correspondence
    mmcgill@g.harvard.edu
    Competing interests
    No competing interests declared
    ORCID iD: 0000-0003-2322-9580
  2. Ariel E Hight

    1. Eaton-Peabody Laboratories, Massachusetts Eye and Ear Infirmary, Boston, United States
    2. Division of Medical Sciences, Harvard Medical School, Boston, United States
    Present address
    Department of Otolaryngology-Head and Neck Surgery, New York University School of Medicine, New York, United States
    Contribution
    Conceptualization, Data curation, Formal analysis, Writing – review and editing
    Competing interests
    No competing interests declared
  3. Yurika L Watanabe

    Eaton-Peabody Laboratories, Massachusetts Eye and Ear Infirmary, Boston, United States
    Contribution
    Data curation, Writing – review and editing
    Competing interests
    No competing interests declared
  4. Aravindakshan Parthasarathy

    1. Eaton-Peabody Laboratories, Massachusetts Eye and Ear Infirmary, Boston, United States
    2. Department of Otolaryngology - Head and Neck Surgery, Harvard Medical School, Boston, United States
    Present address
    Department of Communication Science and Disorders, University of Pittsburgh, Pittsburgh, United States
    Contribution
    Data curation, Formal analysis, Writing – review and editing
    Competing interests
    No competing interests declared
  5. Dongqin Cai

    1. Eaton-Peabody Laboratories, Massachusetts Eye and Ear Infirmary, Boston, United States
    2. Department of Otolaryngology - Head and Neck Surgery, Harvard Medical School, Boston, United States
    Contribution
    Data curation, Writing – review and editing
    Competing interests
    No competing interests declared
  6. Kameron Clayton

    1. Eaton-Peabody Laboratories, Massachusetts Eye and Ear Infirmary, Boston, United States
    2. Department of Otolaryngology - Head and Neck Surgery, Harvard Medical School, Boston, United States
    Contribution
    Methodology, Writing – review and editing
    Competing interests
    No competing interests declared
  7. Kenneth E Hancock

    1. Eaton-Peabody Laboratories, Massachusetts Eye and Ear Infirmary, Boston, United States
    2. Department of Otolaryngology - Head and Neck Surgery, Harvard Medical School, Boston, United States
    Contribution
    Software, Methodology
    Competing interests
    No competing interests declared
  8. Anne Takesian

    1. Eaton-Peabody Laboratories, Massachusetts Eye and Ear Infirmary, Boston, United States
    2. Department of Otolaryngology - Head and Neck Surgery, Harvard Medical School, Boston, United States
    Contribution
    Supervision, Writing – review and editing
    Competing interests
    No competing interests declared
  9. Sharon G Kujawa

    1. Eaton-Peabody Laboratories, Massachusetts Eye and Ear Infirmary, Boston, United States
    2. Department of Otolaryngology - Head and Neck Surgery, Harvard Medical School, Boston, United States
    Contribution
    Supervision, Writing – review and editing
    Competing interests
    No competing interests declared
  10. Daniel B Polley

    1. Eaton-Peabody Laboratories, Massachusetts Eye and Ear Infirmary, Boston, United States
    2. Department of Otolaryngology - Head and Neck Surgery, Harvard Medical School, Boston, United States
    Contribution
    Conceptualization, Supervision, Writing - original draft, Writing – review and editing
    Competing interests
    No competing interests declared
    ORCID iD: 0000-0002-5120-2409

Funding

National Institute on Deafness and Other Communication Disorders (DC018974-02)

  • Matthew McGill

National Institute on Deafness and Other Communication Disorders (DC014871)

  • Ariel E Hight

Nancy Lurie Marks Family Foundation

  • Anne Takesian
  • Daniel B Polley

National Institute on Deafness and Other Communication Disorders (DC009836)

  • Daniel B Polley

National Institute on Deafness and Other Communication Disorders (DC015857)

  • Sharon G Kujawa
  • Daniel B Polley

National Institute on Deafness and Other Communication Disorders (DC018353)

  • Anne Takesian

The funders had no role in study design, data collection, and interpretation, or the decision to submit the work for publication.

Acknowledgements

These studies were supported by a grant from the Nancy Lurie Marks Family Foundation (DP and AT), NIH grant DC009836 (DP), DC015857 (DP and SK), DC018353 (AT), and NIH fellowship DC018974-02 (MM) and DC014871 (AH). We thank MC Liberman and A Indzhykulian for their assistance with hair cell stereocilia imaging.

Ethics

The study was approved by the human subjects Institutional Review Board at Mass General Brigham and Massachusetts Eye and Ear. Data analysis was performed on deidentified data, in accordance with the relevant guidelines and regulations.

All procedures were approved by the Massachusetts Eye and Ear Animal Care and Use Committee and followed the guidelines established by the National Institutes of Health for the care and use of laboratory animals.

Senior Editor

  1. Barbara G Shinn-Cunningham, Carnegie Mellon University, United States

Reviewing Editor

  1. Brice Bathellier, CNRS, France

Reviewers

  1. Brice Bathellier, CNRS, France
  2. Arnaud Norena, CNRS, France
  3. Victoria M Bajo Lorenzana, University of Oxford, United Kingdom

Publication history

  1. Received: May 5, 2022
  2. Preprint posted: May 25, 2022 (view preprint)
  3. Accepted: September 14, 2022
  4. Accepted Manuscript published: September 16, 2022 (version 1)
  5. Version of Record published: October 12, 2022 (version 2)

Copyright

© 2022, McGill et al.

This article is distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use and redistribution provided that the original author and source are credited.


Cite this article

  1. Matthew McGill
  2. Ariel E Hight
  3. Yurika L Watanabe
  4. Aravindakshan Parthasarathy
  5. Dongqin Cai
  6. Kameron Clayton
  7. Kenneth E Hancock
  8. Anne Takesian
  9. Sharon G Kujawa
  10. Daniel B Polley
(2022)
Neural signatures of auditory hypersensitivity following acoustic trauma
eLife 11:e80015.
https://doi.org/10.7554/eLife.80015