
Microsecond interaural time difference discrimination restored by cochlear implants after neonatal deafness

  1. Nicole Rosskothen-Kuhl (corresponding author)
  2. Alexa N Buck
  3. Kongyan Li
  4. Jan WH Schnupp (corresponding author)
  1. Department of Biomedical Sciences, City University of Hong Kong, China
  2. Neurobiological Research Laboratory, Section for Clinical and Experimental Otology, University Medical Center Freiburg, Germany
  3. CityU Shenzhen Research Institute, China
Research Article
Cite this article as: eLife 2021;10:e59300 doi: 10.7554/eLife.59300

Abstract

Spatial hearing in cochlear implant (CI) patients remains a major challenge, with many early deaf users reported to have no measurable sensitivity to interaural time differences (ITDs). Deprivation of binaural experience during an early critical period is often hypothesized to be the cause of this shortcoming. However, we show that neonatally deafened (ND) rats provided with precisely synchronized CI stimulation in adulthood can be trained to lateralize ITDs with essentially normal behavioral thresholds near 50 μs. Furthermore, comparable ND rats show high physiological sensitivity to ITDs immediately after binaural implantation in adulthood. Our result that ND-CI rats achieved very good behavioral ITD thresholds, while prelingually deaf human CI patients often fail to develop a useful sensitivity to ITD, raises urgent questions concerning the possibility that shortcomings in technology or treatment, rather than missing input during early development, may be behind the usually poor binaural outcomes for current CI patients.

Introduction

For patients with severe to profound sensorineural hearing loss, cochlear implants (CIs) can be enormously beneficial, as they often permit spoken language acquisition, particularly when CI implantation takes place early in life (Kral and Sharma, 2012). Nevertheless, the auditory performance achieved by CI users remains variable and falls a long way short of natural hearing.

For example, good speech understanding in the presence of competing sound sources requires the ability to separate speech from background. This is aided by ‘spatial release from masking’, a binaural phenomenon, which relies on the brain’s ability to process binaural spatial cues, including interaural level and time differences (ILDs and ITDs) (Ellinger et al., 2017). While bilateral cochlear implantation is becoming more common (Litovsky, 2010; Conti-Ramsden et al., 2012; Ehlers et al., 2017), bilateral CI recipients still perform poorly in binaural tasks such as sound localization and auditory scene analysis, particularly when multiple sound sources are present (van Hoesel, 2004; van Hoesel, 2012). Indeed, while normal hearing (NH) human listeners may be able to detect ITDs as small as 10 μs (Zwislocki and Feldman, 1956), ITD sensitivity of CI patients, particularly with prelingual onset of deafness, is often poor and sometimes seems completely absent (van Hoesel, 2004; Litovsky, 2010; van Hoesel, 2012; Kerber and Seeber, 2012; Litovsky et al., 2012; Laback et al., 2015; Ehlers et al., 2017).

The reasons for the poor binaural sensitivity of CI recipients are only poorly understood, but two main factors are generally thought to be chiefly responsible, namely: (1) technical limitations of current CI devices and (2) neurobiological factors, such as when the neural circuitry responsible for processing binaural cues fails to develop due to a lack of experience during a presumed ‘critical period’ in early life, or when it degenerates during a period of late deafness. These presumed factors could act alone or in combination. Let us first consider the technological issues. The vast majority of CI devices in clinical use employ stimulation strategies that are variants of the ‘continuous interleaved sampling’ (CIS) method (Wilson et al., 1991). Because CIS-style processors encode only the envelopes of each frequency band and do not synchronize the pulse trains delivered to the two ears, they convey little or no usable ITD information. While these technical limitations are substantial, currently few researchers believe that they alone can be fully responsible for the poor binaural acuity observed in CI patients because it is possible to test patients with experimental processors that overcome some of the shortcomings of standard issue clinical devices. When tested with such experimental devices, many postlingually deaf CI users show better ITD sensitivity, with some of the best performers achieving thresholds comparable to those seen in NH peers. In contrast, the ITD performance of prelingually deaf CI users remains invariably poor, with even rare star performers only achieving thresholds of a few hundred microseconds (Poon et al., 2009; Conti-Ramsden et al., 2012; Litovsky et al., 2012; Gordon et al., 2014; Laback et al., 2015; Litovsky and Gordon, 2016; Ehlers et al., 2017). It is this poor performance of prelingually deaf patients even under optimized experimental conditions that has led to the suggestion that the absence of binaural inputs during a presumed ‘critical’ period in early childhood may prevent the development of ITD sensitivity (Kral and Sharma, 2012; Kral, 2013; Litovsky and Gordon, 2016; Yusuf et al., 2017).

In this context, it is however important to remember that the terms ‘sensitive’ and ‘critical’ period do not have simple, universally accepted definitions, which may create uncertainty about what exactly a ‘critical period hypothesis of binaural hearing’ proposes. Some authors distinguish ‘strong’ and ‘weak’ critical periods. Both types of critical periods are developmental periods during which the acquisition of a new sensory or sensory-motor faculty appears to be particularly easy. However, after ‘weak’ critical periods, full mastery of a faculty may still be acquired with a little more effort (Kilgard and Merzenich, 1998), but missing essential experience during a ‘strong’ critical period leads to substantial and irreparable limitations later in life (Knudsen et al., 1984). Perhaps the best studied example of a strong critical period disorder is amblyopia. Amblyopic patients experience uneven or unbalanced binocular visual stimulation in early life, which leads to a failure of the normal development of the brain’s binocular circuitry. This, in turn, causes sometimes dramatic impairments in visual acuity in the ‘weaker’ eye, as well as in stereoscopic vision. These impairments can only be fully reversed if diagnosed and treated prior to critical period closure, and despite substantial research efforts, no interventions performed after critical period closure can offer more than partial remediation of the deficits (Tsirlin et al., 2015). If we hypothesize that binaural hearing development exhibits a similarly strong critical period, then developing clinical CI processors with better ITD coding might not benefit patients with hearing loss early in life, as it might not be possible to implant these patients early enough to provide them with suitable binaural experience during their (strong) critical period. Their brains would then be unable to learn to take full advantage of the binaural cues that improved CIs provided later in life might deliver.

For neonatally deaf patients, periods of sensory deprivation during development are the norm because profound bilateral hearing loss is hard to diagnose in neonates, and measurements of auditory brainstem responses (ABRs) have to be repeated to exclude delayed maturation of the auditory brainstem (Jöhr et al., 2008; Cosetti and Roland, 2010; Arndt et al., 2014). Also, before CI surgery is considered, non-invasive alternatives such as hearing aids may be tried first. Finally, risks associated with anesthesia in young babies provide another disincentive for very early implantation (Dettman et al., 2007; Jöhr et al., 2008; Cosetti and Roland, 2010). Altogether, this means that by the time of implantation, neonatally deaf pediatric CI patients will typically already have missed out on many months of auditory input. Consequently, if there is a strong binaural critical period, then this lack of early experience might put near-normal binaural hearing performance forever out of their reach.

Various lines of animal experimentation make such a critical period hypothesis plausible, including immunohistochemical studies that have shown degraded tonotopic organization (Rosskothen-Kuhl and Illing, 2012; Rauch et al., 2016) and changes in stimulation-induced molecular, cellular, and morphological properties of the auditory pathway of neonatally deafened (ND) CI rats (Illing and Rosskothen-Kuhl, 2012; Rosskothen-Kuhl and Illing, 2012; Jakob et al., 2015; Rauch et al., 2016; Rosskothen-Kuhl et al., 2018). Additional studies demonstrate that abnormal sensory input during early development can alter ITD tuning curves in key brainstem nuclei of gerbils (Seidl and Grothe, 2005; Beiderbeck et al., 2018). Furthermore, numerous electrophysiological studies on cats and rabbits have reported significantly lower ITD sensitivity to CI stimulation in the inferior colliculus (IC) (Hancock et al., 2010; Hancock et al., 2012; Hancock et al., 2013; Chung et al., 2019) and auditory cortex (AC) (Tillein et al., 2010; Tillein et al., 2016) after early deafening compared to what is observed in hearing experienced controls.

However, although the ‘strong critical period hypothesis’ of poor ITD sensitivity in CI patients is plausible, it has not yet been rigorously tested. The previous animal studies just mentioned have not investigated perceptual limits of binaural function in behavioral experiments using optimized binaural inputs. Similarly, while we do know that CI patients with NH experience in early childhood usually have better ITD sensitivity than patients without (Litovsky, 2010; Laback et al., 2015; Ehlers et al., 2017), we do not yet know whether early deaf patients could develop good ITD sensitivity after implantation later in life if they were fitted with CI processors providing optimized binaural stimulation from the outset. Currently, only research interfaces that are unsuitable for everyday clinical use can deliver the high-quality binaural inputs needed to investigate this question. Consequently, there are currently no patient cohorts who experienced through their CIs the long periods of high-quality ITD information that may be needed for them to become expert at using ITDs, irrespective of any hypothetical critical periods. We cannot at present exclude the possibility that the ND auditory pathway may retain a substantial innate ability to encode ITD even after long periods of neonatal deafness, but that this ability may atrophy after countless hours of binaural CI stimulation through conventional clinical processors which convey no useful ITD information.

Since these possibilities cannot currently be distinguished based on clinical data, animal experiments that can measure binaural acuity behaviorally are needed. The first objective, and the aim of this article, is to examine how much functional ITD sensitivity can be achieved in mature ND animals that receive bilaterally synchronized CI stimulation capable of delivering ITD cues with microsecond accuracy. In essence, we attempted to disprove the ‘strong critical period hypothesis for ITD sensitivity development’ by examining whether experimental animals fitted with binaural CIs may be able to achieve good ITD sensitivity without excessive training or complicated interventions, even after severe hearing loss throughout infancy. To achieve this, we used stimulation optimized for ITD encoding straight after implantation.

We therefore established a new behavioral bilateral CI animal model and setup capable of delivering microsecond precise ITD cues to cohorts of ND rats (early-onset deafness), which received training with synchronized bilateral CI stimulation in young adulthood. These young adult rats learned easily and quickly to lateralize ITDs behaviorally, achieving thresholds as low as ~50 μs, comparable to those of their NH litter mates. We also observed that such ND rats exhibit a great deal of physiological ITD sensitivity in their IC straight after implantation. Our results therefore indicate that, at least in rats, there appears to be no strong critical period for ITD sensitivity.

Results

Early deaf CI rats discriminate ITD as accurately as their normally hearing litter mates

To test whether ND rats can learn to discriminate ITDs of CI stimuli, we trained five ND rats that received chronic bilateral CIs in young adulthood (postnatal weeks 10–14) in a two-alternative forced choice (2AFC) ITD lateralization task (NDCI-B; see Figure 1), and we compared their performance against behavioral data from five age-matched NH rats trained to discriminate the ITDs of acoustic pulse trains (NH-B; see Figure 1; Li et al., 2019). Animals initiated trials by licking a center ‘start spout’ and responded to 200 ms long 50 Hz binaural pulse trains by licking either a left or a right ‘response spout’ to receive drinking water as positive reinforcement (Figure 2—figure supplements 1a and 2b; Video 1). Which response spout would give water was indicated by the ITD of the stimulus. We used pulses of identical amplitude in each ear, so that systematic ITD differences were the only reliable cue available (Figure 2—figure supplements 1c,d and 2f). NDCI-B rats were stimulated with biphasic electrical pulse trains delivered through chronic CIs, while NH-B rats received acoustic pulse trains through a pair of ‘open stereo headphones’ implemented as near-field sound tubes positioned next to each ear when the animal was at the start spout (Figure 2—figure supplements 2a; see Li et al., 2019 for details). During testing, stimulus ITDs varied randomly.

Timeline and experimental treatment of our three cohorts.

NDCI-B and NDCI-E rats were both neonatally deafened by kanamycin and bilaterally implanted as young adults. Around half of them underwent behavioral training and testing (NDCI-B), while the other half were used for multi-unit recordings of IC neurons directly after bilateral CI implantation. NH-B rats were normal hearing and began behavioral training and testing as young adults. w: weeks. d: days.

Video 1
Neonatally deafened CI rat performing a two-alternative forced choice ITD lateralization task in a custom-made behavior setup.

The animal initiates trials by licking the center ‘start spout’ and responds to binaural pulse trains by licking either the left or right ‘response spout’, receiving drinking water as positive reinforcement if the response is correct or a timeout with a flashing light as negative reinforcement if it is incorrect. Which response is correct is indicated by the ITD of the stimulus presented when the animal licks the center spout.

The behavioral data (Figure 2) were collected over a testing period of around 14 days. For NDCI-B rats, the initial lateralization training usually started 1 day after CI implantation. On average, rats were trained for 8 days before we started to test them on ITD sensitivity. The behavioral performance of each rat is shown in Figure 2, using light blue for NH-B (a–e) and dark blue for NDCI-B (f–j) animals. Figure 2 clearly demonstrates that all rats, whether NH with acoustic stimulation or ND with CI stimulation, were capable of lateralizing ITDs. As might be expected, behavioral sensitivity varied from animal to animal. To quantify the behavioral ITD sensitivity of each rat, we fitted psychometric curves (see Materials and methods; red lines in Figure 2) to the raw data and calculated the slope of each curve at ITD = 0. Figure 2k summarizes these slopes for NH-B (light blue) and NDCI-B (dark blue) animals.
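The psychometric analysis described above (maximum-likelihood sigmoid fit, slope at ITD = 0, 75% correct threshold) can be sketched as follows. This is a minimal illustration only: the logistic parameterization and the synthetic trial counts are our assumptions, not the authors' actual fitting code or data.

```python
import numpy as np
from scipy.optimize import minimize

# Synthetic example data: stimulus ITDs in ms (negative = left ear leading),
# number of trials and number of 'right' responses at each ITD.
itds     = np.array([-0.16, -0.08, -0.04, 0.0, 0.04, 0.08, 0.16])
n_trials = np.array([30, 30, 30, 30, 30, 30, 30])
n_right  = np.array([ 2,  5, 10, 15, 20, 26, 29])

def logistic(x, mu, s):
    """Sigmoid psychometric function: P('right' response) as a function of ITD."""
    return 1.0 / (1.0 + np.exp(-(x - mu) / s))

def neg_log_lik(params):
    """Negative Bernoulli log-likelihood of the observed choice counts."""
    mu, log_s = params
    p = np.clip(logistic(itds, mu, np.exp(log_s)), 1e-9, 1 - 1e-9)
    return -np.sum(n_right * np.log(p) + (n_trials - n_right) * np.log(1 - p))

fit = minimize(neg_log_lik, x0=[0.0, np.log(0.05)], method='Nelder-Mead')
mu, s = fit.x[0], np.exp(fit.x[1])

# Slope of the fitted curve at ITD = 0: d/dx logistic = p(1-p)/s.
# ITDs are in ms, so divide by 1000 to express per µs; multiply by 100 for %.
p0 = logistic(0.0, mu, s)
slope_pct_per_us = 100.0 * p0 * (1.0 - p0) / s / 1000.0

# 75% correct lateralization threshold: the ITD offset from the curve midpoint
# at which P('right') = 0.75, i.e. s * ln(3), converted to µs.
threshold_us = abs(s * np.log(3.0)) * 1000.0
print(f"slope at 0: {slope_pct_per_us:.3f} %/µs, threshold: {threshold_us:.1f} µs")
```

With synthetic data shaped like the published curves, this sketch yields slopes on the order of 0.5%/µs and thresholds of a few tens of microseconds, the same order of magnitude as the values reported below.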

Figure 2 with 2 supplements
ITD psychometric curves of normal hearing acoustically stimulated (NH-B, a–e) and neonatally deafened CI-stimulated rats (NDCI-B, f–j).

Panel titles show corresponding animal IDs. Y-axis: proportion of responses to the right-hand side. X-axis: stimulus ITD in ms, with negative values indicating left ear leading. Blue dots: observed proportions of ‘right’ responses for the stimulus ITD given by the x-coordinate. Number fractions shown above or below each dot indicate the absolute number of trials and ‘right’ responses for the corresponding ITDs. Blue error bars show Wilson score 95% confidence intervals for the underlying proportion of ‘right’ judgments. Red lines show sigmoid psychometric curves fitted to the blue data by maximum likelihood. Green dashed lines show the slopes of the psychometric curves at x = 0. Slopes serve to quantify the behavioral sensitivity of the animal to ITD. Panel (k) summarizes the ITD sensitivities (psychometric slopes) across the individual animal data shown in (a–j) in units of % change in the animals’ ‘right’ judgments per μs change in ITD.
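The Wilson score intervals shown as error bars follow the standard formula; the sketch below is our own illustration (the example trial counts are made up, not taken from the figure).

```python
import math

def wilson_ci(k, n, z=1.96):
    """95% Wilson score confidence interval for a binomial proportion,
    given k 'right' responses out of n trials."""
    p = k / n
    denom = 1.0 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return center - half, center + half

# e.g. 26 'right' responses in 30 trials at one ITD value
lo, hi = wilson_ci(26, 30)
print(f"95% CI: [{lo:.3f}, {hi:.3f}]")
```

Unlike the naive Wald interval, the Wilson interval stays within [0, 1] and behaves sensibly for proportions near 0 or 1, which matters here because well-lateralized ITDs produce near-unanimous responses.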

The slopes for both groups fell within the same range. Remarkably, the observed mean sensitivity of the NDCI-B animals (0.487%/µs) was only about 20% worse than that of the NH-B animals (0.657%/µs). Furthermore, the difference in means between experimental groups (0.17%/µs) was so much smaller than the animal-to-animal variance (~0.73%²/µs²) that prohibitively large cohorts of animals would be required to have any reasonable prospect of finding a significant difference. Indeed, we performed a Wilcoxon test on the slopes and found no significant difference (p=0.4375). Similarly, both cohorts showed similar 75% correct lateralization thresholds (median NH-B: 41.5 µs; median NDCI-B: 54.8 µs; mean NH-B: 79.9 µs; mean NDCI-B: 63.5 µs). Strikingly, the ITD thresholds of our ND-CI rats are thus orders of magnitude better than those reported for prelingually deaf human CI patients, who often have ITD thresholds too large to measure and often in excess of 3000 µs (Litovsky et al., 2010; Ehlers et al., 2017). Thresholds in ND rats were not dissimilar from the approx. 10–60 µs range of 75% correct ITD discrimination thresholds reported for normal human subjects tested with noise bursts (Klumpp and Eady, 1956) and pure tones (Zwislocki and Feldman, 1956), or the ≈40 µs thresholds reported for NH ferrets tested with noise bursts (Keating et al., 2013a).

Varying degrees and types of ITD tuning are pervasive in the neural responses in the IC of ND rats immediately after adult cochlear implantation

To investigate the amount of physiological ITD sensitivity present in the hearing-inexperienced rat brain, we recorded responses of n = 1140 multi-units in the IC of four young adult ND rats (NDCI-E; see Figures 1 and 3). These rats were litter mates of the behavioral ND animals (NDCI-B) and were stimulated by isolated, bilateral CI pulses with ITDs varying randomly over a ±160 μs range (ca. 123% of the rat’s physiological range; Koka et al., 2008). For the cohort of neonatally deafened rats (NDCI-E), the CI implantation and the electrophysiology measurements in the IC were performed on the same day, with no prior electric hearing experience. The stimuli were again biphasic current pulses of identical amplitude in each ear, so that systematic differences in responses can only be attributed to ITD sensitivity (see Figure 2—figure supplements 1c-d). Responses of IC neurons were detected for currents as low as 100 μA. Figure 3 shows a selection of responses as raster plots and corresponding ITD tuning curves (firing rate as a function of ITD; Figure 3, #1–4).

Examples of interaural time difference (ITD) tuning curves of neonatally deafened CI rats (NDCI-E) as a function of ITD.

Dot raster plots are shown above the corresponding ITD tuning curves. The multi-units shown were selected to illustrate some of the variety of ITD tuning curve depths and shapes observed. In the raster plots, each blue dot shows one spike. Alternating white and green bands show responses to n = 30 repeats at each ITD shown. Tuning curve response amplitudes are baseline corrected and normalized relative to the maximum of the mean response across all trials, during a period of 3–80 ms post-stimulus onset. Error bars show SEM. Above each sub-panel we show signal-to-total variance ratio (STVR) values to quantify ITD tuning. Panels are arranged top to bottom by increasing STVR. ITD > 0: ipsilateral ear leading; ITD < 0: contralateral ear leading.

For NDCI-E animals, ITD tuning varied from one recording site to the next both in shape and magnitude, and firing rates clearly varied as a function of ITD (Figure 3). While many multi-units showed typical short-latency onset responses to the stimulus with varying response amplitudes (Figure 3, #1, #3), some showed sustained, but still clearly tuned, responses extending for up to 80 ms or longer post-stimulus (Figure 3, #4). Although the interpretation of tuning curves is complex, the shapes of the ITD tuning curves we observed in rat IC (Figure 3) mostly resembled the ‘peak’, ‘monotonic sigmoid’, ‘trough’, and ‘multi-peak’ shapes previously described in the IC of cats (Smith and Delgutte, 2007).

To quantify how strongly the neural responses recorded at any one site depended on stimulus ITD, signal-to-total variance ratio (STVR) values were calculated as described in Hancock et al., 2010. The STVR quantifies the proportion of response variance that can be accounted for by stimulus ITD (see Materials and methods). Each sub-panel of Figure 3 indicates the STVR value obtained for the corresponding multi-unit, while Figure 4 shows the distribution of STVR values for the NDCI-E cohort (red). For comparison with a similar previous bilateral CI study, Figure 4 also shows the STVR values for the IC of congenitally deaf cats (blue) reported by Hancock et al., 2010, in which they are referred to as signal-to-noise ratio (SNR) values. When comparing the distributions shown, it is important to be aware that there are significant methodological, as well as species, differences between our study and the study that produced the cat data (Hancock et al., 2010), so the cross-species comparison in particular must be done with care. Nevertheless, the distributions clearly show that ITD STVRs in our NDCI-E rats are very good, and in line with the values reported by others using similar methodologies. For interpretation purposes, an STVR > 0.5 is considered good ITD sensitivity. The proportion of multi-units with relatively large STVR values (substantial ITD tuning) is notably high among the NDCI-E rats, with a median STVR for IC multi-units of 0.362. In comparison, Hancock et al., 2010 reported a lower median STVR (referred to as SNR) for congenitally deaf cats (0.19) than for adult deafened cats (0.45). The proportion of rat multi-units that showed statistically significant ITD tuning (p≤0.01), as determined by ANOVA (see Materials and methods), was also very large in ND (1125/1229 ≈ 91%) CI-stimulated rats.
Thus, for our rats that were deafened before the onset of hearing, a lack of early auditory experience during what ought to have been a critical period for ITD sensitivity did not produce a measurable decline in overall sensitivity of IC neurons to the ITD of CI stimulus pulses. This is perhaps unexpected given that previous studies comparing ITD tuning in the IC of congenitally deaf white cats with that of hearing experienced, adult deafened wild type cats did report noticeably worse ITD tuning in the congenitally deaf cats (Hancock et al., 2013). Note that congenitally deaf white cats lose hair-cell function between postnatal days 3 and 10 (Mair and Elverland, 1977).
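The ANOVA-style variance decomposition behind the STVR can be sketched as follows. This is our generic illustration of the idea (simulated Poisson spike counts, hypothetical tuning); it is not a reproduction of the exact computation in Hancock et al., 2010 or in our Materials and methods.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated spike counts: rows = ITD conditions, columns = repeated trials.
# An ITD-tuned unit has mean rates that vary systematically across rows.
n_itds, n_reps = 9, 30
mean_rates = np.linspace(2.0, 12.0, n_itds)   # hypothetical sigmoid-like tuning
counts = rng.poisson(mean_rates[:, None], size=(n_itds, n_reps)).astype(float)

def stvr(counts):
    """Signal-to-total variance ratio: the fraction of total response variance
    accounted for by the stimulus, computed as the sum of squares between
    ITD conditions divided by the total sum of squares."""
    grand = counts.mean()
    cond_means = counts.mean(axis=1, keepdims=True)
    ss_signal = counts.shape[1] * np.sum((cond_means - grand) ** 2)
    ss_total = np.sum((counts - grand) ** 2)
    return ss_signal / ss_total

print(f"STVR = {stvr(counts):.2f}")   # larger for tuned units, near 0 for untuned
```

An untuned unit (constant mean rate across ITDs) yields an STVR near zero, while strong tuning pushes the value toward 1; the 0.5 benchmark mentioned above sits between these extremes.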

Bar chart shows distributions of interaural time difference (ITD) signal-to-total variance ratio (STVR) values for inferior colliculus (IC) multi-units of neonatally deafened cochlear implanted rats (NDCI-E, red).

STVR value distributions for IC single-unit data recorded by Hancock et al., 2010 for congenitally deaf cats (blue) under cochlear implant stimulation are also shown for comparison and are referred to as SNR in this cat study.

Nevertheless, the results in Figures 3 and 4 clearly show that many IC neurons in the inexperienced, adult midbrain of NDCI-E rats are quite sensitive to changes in the ITD of CI pulse stimuli by just a few tens of microseconds, and our behavioral experiments showed that NDCI-B rats (Figure 2f–j) can readily learn to use this neural sensitivity to perform behavioral ITD discrimination with an accuracy similar to that seen in their NH-B litter mates (Figure 2a–e).

Discussion

This study is the first demonstration that, at least in rats, severely degraded auditory experience in early development does not inevitably lead to impaired binaural time processing in adulthood. In fact, the ITD thresholds of our NDCI-B rats (≈50 µs) were as good as the ITD thresholds of NH-B rats (Li et al., 2019), and many times better than those typically reported for early deaf human CI patients, whose thresholds are often too large to measure (Litovsky et al., 2010; Gordon et al., 2014; Ehlers et al., 2017). The good performance exhibited by our NDCI-B animals raises the important question of whether early deaf human CI patients might perhaps also be able to achieve near-normal ITD sensitivity if supplied with optimal bilateral CI stimulation capable of delivering adequate ITDs from the first electric stimulation onward, even in the absence of hearing experience during a period that has been thought to be critical for the development of ITD sensitivity. But before we consider translational questions that might be raised by our results, we should address two aspects of this study which colleagues may find surprising:

First, some studies have deemed rats a poor model for ITD processing because their medial superior olive (MSO) contains fewer ITD-sensitive neurons and because their limited low-frequency hearing may result in limited ITD perception (Grothe and Klump, 2000; Wesolek et al., 2010). However, in animals with relatively high-frequency hearing, such as rats, envelope ITD coding through the lateral superior olive is likely to make important contributions (Joris and Yin, 1995). The only previous behavioral study of ITD sensitivity in rats outside of our lab (Wesolek et al., 2010) concluded that rats are unlikely to be sensitive to the interaural phase of relatively low-frequency tones. High-frequency ‘envelope’ ITD sensitivity is also bound to be of great importance in prosthetic hearing given that CIs rarely reach the apex of the cochlea. In Li et al., 2019, we recently demonstrated that NH rats can use ITD cues in 2AFC sound lateralization tasks and thus concluded that, at least for broadband clicks, rats show ITD sensitivity. Here, we focused on broadband acoustic or electrical pulse stimuli that provide plenty of ‘onset’ and ‘envelope’ ITDs and that are processed well even at high carrier frequencies (Joris and Yin, 1995; Bernstein, 2001). That may also explain why our CI rats showed good ITD sensitivity even though our CIs targeted the lower mid-frequency region in each ear, and not the apical region associated with low-frequency hearing. Recent studies in late-deafened adult and pediatric CI patients have shown that ITDs delivered to mid- and high-frequency cochlear regions can be detected behaviorally (Kan et al., 2016; Ehlers et al., 2017).

Second, electrophysiology studies on congenitally deaf CI cats reported substantially reduced ITD sensitivity relative to wild-type, hearing experienced cats acutely deafened as adults (Tillein et al., 2010; Hancock et al., 2010; Tillein et al., 2016). These studies recorded neural tuning high up in the auditory pathway (AC and IC); therefore, one cannot be certain whether the reduced sensitivity reflects a fundamental degradation of ITD processing in the olivary nuclei or merely poor maturation of connections to higher-order areas, the latter of which may be reversible with experience and training. In the IC of our ND rats, we found significant ITD sensitivity in 91% of recording sites, compared to only 48% reported for congenitally deaf cats (Hancock et al., 2010) or 62% for ND rabbits (Chung et al., 2019). When tested with stimulation optimized to deliver microsecond-precise ITD cues to a naive auditory system, the proportion of ITD-sensitive sites in NDCI-E rats is thus more similar to the proportions in adult deafened CI-stimulated cats (84–86%; Smith and Delgutte, 2007; Hancock et al., 2010), rabbits (~75%; Chung et al., 2016; Chung et al., 2019), or gerbils (~74%; Vollmer, 2018). Figure 4 suggests that the ITD STVRs seen in our ND-CI rats fall in a range similar to the ITD STVRs previously reported for congenitally deaf CI cats (Hancock et al., 2010). While for cats the proportion of IC multi-units with large ITD STVR values (>0.5) appears to be reduced in animals lacking early auditory experience, the same does not appear to be the case in our rats. Whether these quantitative differences in physiological ITD sensitivity are due to methodological and/or species differences cannot be determined, but we believe that these apparent differences are ultimately unlikely to be important because even the congenitally deaf cats still had a substantial number of IC units showing fairly large amounts of ITD sensitivity.
In fact, more than 20% of the congenitally deaf cat IC units had STVR values of 0.5 or higher, which indicates rather good ITD sensitivity. It is important to remember that it is unknown how much ITD tuning in the IC or AC is necessary to make behavioral ITD discrimination thresholds of ≈50 µs possible, as seen in our NDCI-B cohort (Figure 2), or whether this requirement is species specific. However, multi-units such as #3 and #4 shown in Figure 3 change their firing rates substantially between ITD steps of only 20 µs, and these multi-units have STVRs that are not outside the range reported for congenitally deaf cats. Thus, we cannot conclude from the electrophysiology data alone whether the quantitative differences in ITD sensitivity between these studies would equate to differences in behavioral lateralization performance. The level of physiological ITD sensitivity previously observed in cats (Tillein et al., 2010; Hancock et al., 2010; Tillein et al., 2016) and rabbits (Chung et al., 2019) could be sufficient to enable good behavioral ITD discrimination performance if only these animals could be trained and tested on an appropriate task.

Thus, in our opinion, the previously reported reductions in physiological ITD sensitivity seen in the IC (Chung et al., 2016; Chung et al., 2019) or AC (Tillein et al., 2010; Tillein et al., 2016) of early deaf animals do not seem nearly large enough to fully explain the very poor behavioral ITD thresholds seen in most early deaf humans. Indeed, our own finding that the behavioral ITD sensitivity of our NDCI-B rats compares favorably with that of NH-B animals strongly suggests that the poor ITD sensitivity in human CI patients may well have causes beyond the lack of auditory experience during a presumed critical period.

It is unclear why congenitally deaf cats (Hancock et al., 2010; Hancock et al., 2013) show a modest reduction in neural ITD sensitivity while the NDCI-E rats in the present study seem not to. There are numerous methodological and species differences that might account for this ultimately rather small discrepancy. More pertinent for the present discussion is that both preparations exhibit a great deal of innate residual ITD sensitivity in their midbrains despite severe hearing impairment throughout development (Tillein et al., 2010; Hancock et al., 2010; Hancock et al., 2013; Tillein et al., 2016; Chung et al., 2019). We would find it surprising if this physiological ITD sensitivity of IC neurons could never be harnessed to inform perceptual decisions in the cats’ and rabbits’ brains. Thus, in light of our new behavioral results in rats, it seems reasonable to expect that, with appropriate rehabilitation and training, neonatally deaf cats and rabbits (and perhaps even humans) might be able to learn to make use of their residual innate physiological ITD sensitivity to perform very well in binaural hearing tasks.

The most striking difference between our results and other previously published work remains that the behavioral ITD discrimination thresholds of our NDCI-B rats are an order of magnitude or more better than those of early deaf human CI patients (Gordon et al., 2014; Litovsky and Gordon, 2016; Ehlers et al., 2017). As mentioned in the introduction, previous authors have proposed that the very poor performance typical of early deaf human CI patients may be due to ‘factors such as auditory deprivation, in particular, lack of early exposure to consistent timing differences between the ears’ (Ehlers et al., 2017), in other words, the critical period hypothesis. However, neonatal deafening and severe hearing loss until reaching developmental maturity did not prevent our NDCI-B rats from achieving very good ITD discrimination performance. Admittedly, there may be species differences at play here. Our ND rats were implanted as young adults, and were severely deprived of auditory input throughout their childhood, but humans mature much more slowly, so even patients implanted at a very young age will have suffered auditory deprivation for a substantially longer absolute time period than our rats. Nevertheless, our results on early deafened CI rats show that lack of auditory input during early development does not need to result in poor ITD sensitivity and is therefore unlikely to be a sufficient explanation for the poor ITD sensitivity found in early deaf CI patients.

Previous studies of the development of the binaural circuitry in animal models also have not provided any strong evidence for a critical period, even if they have pointed to the important role that early experience can have in shaping this circuitry. Most of these studies have focused on low-frequency fine-structure ITD pathways through the medial superior olive (MSO), rather than the lateral superior olive (LSO) envelope ITD pathways that are likely to be of particular relevance for the CI ITD processing we are studying here. But even if they may not be directly applicable, they are nonetheless somewhat informative for the present discussion. For example, developmental studies in ferrets have shown that the formation of afferent synapses to the MSO, one of the main brainstem nuclei for ITD processing, is essentially complete before the onset of hearing (Brunso-Bechtold et al., 1992). In mice, the highly specialized calyx of Held synapses, which are thought to play key roles in relaying precisely timed information in the binaural circuitry, have been shown to mature before the onset of hearing (Hoffpauir et al., 2006). In both cases, crucial binaural circuitry elements are completed before any sensory-input-dependent plasticity can take place. However, there are also studies indicating that the developing binaural circuitry can respond to changes in input. For example, in gerbils, key parts of the binaural ITD processing circuitry in the auditory brainstem fail to mature when driven with strong, uninformative omnidirectional white noise stimulation during development (Kapfer et al., 2002; Seidl and Grothe, 2005), which shows that inappropriate or uninformative sensory input can disturb the development of binaural brainstem pathways.
A perhaps related finding by Tirko and Ryugo, 2012 shows that inhibitory pathways in the MSO, which are thought to be essential for ITD encoding, are significantly reduced in congenitally deaf cats at postnatal day 90, compared to NH peers, but can be fully restored after only 3 months of CI stimulation. Finally, Pecka et al., 2008 demonstrated the importance of glycinergic inhibition and its timing in the MSO in controlling binaural excitation by fine-tuning the delay between the inputs arriving from the two ears, which could allow ITD pathways to be ‘tuned’, possibly in an experience-dependent manner. Overall, these studies point to varying extents of experience dependence in the developing binaural pathway, but none of them suggests that the absence of stimulation early in life would necessarily prevent the restoration of effective binaural processing after the closure of some presumed critical period. None of the published articles we could find points to a biological mechanism for a critical period that could explain the loss of ITD sensitivity in early deaf CI users merely as a consequence of an absence of input in early life.

It is well known that the normal auditory system not only combines ITD information with ILD and monaural spectral cues to localize sounds in space, it also adapts strongly to changes in these cues, and can re-weight them depending on their reliability (Keating et al., 2013b; Keating et al., 2015; Tillein et al., 2016; Kumpik et al., 2019). Similarly, Jones et al., 2011 demonstrated changes in ITD and ILD thresholds as head size and pinnae grow for up to 6 weeks postnatally in chinchillas. Again this highlights the importance of plasticity of binaural hearing during development. However, no studies have demonstrated that critical periods in the ITD pathways will irrevocably close if sensory input is simply absent, rather than altered. By using the rat model, which allowed us to study ITD sensitivity behaviorally, we were able to show conclusively that the ability to use ITD cues perceptually does not disappear permanently after hearing loss during a presumed critical period.

Given that our results cast doubts on the critical period hypothesis, it may be time to consider other likely causes for ITD insensitivity in CI patients. One possibility that we believe has not been given enough attention is that an innate ITD sensitivity could conceivably degrade over the course of prolonged exposure to the entirely inconsistent and uninformative ITDs delivered by current standard clinical CI processors. This possibility is consistent with the observations by Zheng et al., 2015 and by Litovsky and Gordon, 2016, who note that, even after binaural CI listening experience extending for >4 years or >6 years, respectively, the ITD sensitivity of bilateral CI users still lags well behind that of age-matched controls with NH. If the clinical processors supplied to these bilateral CI users do not supply high-quality ITD cues, then no amount of experience will make these patients experts in the use of ITDs. Contrast this with our NDCI-B rats, which received only highly precise and informative ITDs right from the start, with no additional auditory cues, and were able to lateralize ITDs as well as their NH littermates after only 2 weeks of training. This is admittedly a somewhat unfair comparison. Clinical CI processors are, for good reason, designed first and foremost for the purpose of delivering all important speech formant information in real-life settings, and optimizing ITD coding was not a priority in their original design. Nevertheless, our results raise the possibility that incorporating better ITD encoding in clinical processors might lead to better binaural outcomes for future generations of CI patients.

Current CI processors produce pulsatile stimulation based on fixed-rate interleaved sampling (Wilson et al., 1991; Stupak et al., 2018), which is neither synchronized to stimulus fine structure nor synchronized between the ears. Furthermore, at typical clinical stimulation rates (>900 pps), CI users are not sensitive to speech envelope ITDs, as envelope ITD sensitivity requires peak-shaped envelopes (Laback et al., 2004; Grantham et al., 2008; van Hoesel et al., 2009; Noel and Eddington, 2013; Laback et al., 2015). Consequently, the carrier pulses are too fast, and the envelope shapes of everyday sounds are not peaked enough, so that speech processors only ever provide uninformative envelope ITDs to the children using them (Laback et al., 2004; Grantham et al., 2008; Laback et al., 2015). Perhaps the brainstem circuits of children fitted with conventional bilateral CIs simply ‘learn’ to ignore the unhelpful ITDs contained in the inputs they receive, that is, these circuits adapt to discount uninformative ITDs. In contrast, precise ITD cues at low pulse rates were essentially the only form of useful auditory input that our NDCI-B rats experienced, and they quickly learned to use these precise ITD cues. Thus, our data raise the possibility that the mammalian auditory system may develop ITD sensitivity in the absence of early sensory input and that this sensitivity may then be either refined or lost, depending on how informative the binaural inputs turn out to be.

The inability of early deaf CI patients to use ITDs may thus be somewhat similar to conditions such as amblyopia or failures of stereoscopic depth vision development, pathologies that are caused more by unbalanced or inappropriate inputs than by a lack of sensory experience (Levi et al., 2015). For the visual system, it has been shown that orientation-selective neuronal responses exist at eye-opening and thus are established without visual input (Ko et al., 2013). If this maladaptive plasticity hypothesis is correct, then it may be possible to ‘protect’ ITD sensitivity in young bilateral CI users by exposing them to regular periods of precise ITD information from the beginning of binaural stimulation. Whether CI patients are able to recover normal ITD sensitivity much later if rehabilitated with useful ITDs for prolonged periods, or whether their ability to process microsecond ITDs atrophies irreversibly, is unknown and will require further investigation.

While these interpretations of our findings would lead us to argue that bilateral CI processing strategies may need to change to make microsecond ITD information available to CI patients, one must nevertheless acknowledge the difficulty in changing established CI processing strategies. The CIS paradigm (Wilson et al., 1991), from which most processor algorithms are derived, times the stimulus pulses so that only one electrode channel delivers a pulse at any one time. This has been shown to minimize cross-channel interactions due to ‘current spread’, which might compromise the already quite limited tonotopic place coding of CIs. Additionally, CI processors run at high pulse rates (≥900 Hz), which may be necessary to encode sufficient amplitude modulations for speech recognition (Loizou et al., 2000). However, ITD discrimination has been shown to deteriorate when pulse rates exceed a few hundred Hz in humans (van Hoesel, 2007; Laback et al., 2007) and animals (Joris and Yin, 1998; Chung et al., 2016). Our own behavioral experiments described here were conducted with low pulse rates (50 Hz), and future work will need to determine whether ITD discrimination performance declines at higher pulse rates, which would make pulse rate an important factor for the development of good ITD sensitivity under these stimulation conditions. Thus, designers of novel bilateral CI speech processors may face conflicting demands: They must invent devices that fire each of 20 or more electrode channels in turn, at rates that are both fast, so as to encode the speech envelope in fine detail, but also slow, so as not to overtax the brainstem’s ITD extraction mechanisms, and they must make the timing of at least some of these pulses encode stimulus fine structure and ITDs.
While difficult, this may not be impossible, and promising research is underway that either provides fine structure information on up to four apical electrodes while running a CIS strategy on the remaining electrodes (MED-EL CIs; Riss et al., 2014), uses a mixture of different pulse rates for different electrode channels (Thakkar et al., 2018), presents redundant temporal fine structure information to multiple electrode channels (Churchill et al., 2014), or aims to ‘reset’ the brain’s ITD extraction mechanisms by introducing occasional ‘double pulses’ into the stimulus (Srinivasan et al., 2018). A detailed discussion is beyond the scope of this article. Our results underscore the need to pursue this work with urgency, as we have provided evidence that the absence of auditory input during a critical period does not necessarily mean that early deafened CI users must show poor or no ITD sensitivity.

On a final note, we would be remiss if we did not acknowledge that, while the ‘maladaptive plasticity hypothesis’ that we have elaborated over the last few paragraphs is compatible with the experimental data we have presented, it would be wrong to assert that our data prove this hypothesis. At present, we have merely managed to cast serious doubt on the popular critical period hypothesis; the maladaptive plasticity hypothesis remains just one possible alternative explanation, alongside others such as differing etiologies of deafness, for the observed poor ITD sensitivity of human bilateral CI users. It still needs to be put to the test by measuring the effect of deliberately degrading the quality of ITD cues to varying extents and over various periods. However, the animal model introduced in this study now makes this important task experimentally feasible.

Materials and methods

All procedures involving experimental animals reported here were approved by the Department of Health of Hong Kong (#16–52 DH/HA and P/8/2/5) and/or the Regierungspräsidium Freiburg (#35–9185.81/G-17/124), as well as by the appropriate local ethical review committee. A total of 14 rats were obtained for this study from the breeding facilities of the Chinese University of Hong Kong or from Janvier Labs (Le Genest-Saint-Isle, France), and these were allocated randomly to the deafened and hearing experienced cohorts described in Figure 1.

Deafening


Rats were deafened by daily intraperitoneal (i.p.) injections of 400 mg/kg kanamycin from postnatal days 9 to 20 inclusively (Osako et al., 1979; Rosskothen-Kuhl and Illing, 2012). This is known to cause widespread death of inner and outer hair cells (Osako et al., 1979; Matsuda et al., 1999; Argence et al., 2008) while keeping the number of spiral ganglion cells comparable to that in untreated control rats (Osako et al., 1979; Argence et al., 2008). Osako et al., 1979 have shown that rat pups treated with this method achieve hearing thresholds around 70 dB for only a short period (~p14–16) and are severely to profoundly hearing impaired thereafter, resulting in widespread disturbances in the histological development of their central auditory pathways, including a nearly complete loss of tonotopic organization (Rosskothen-Kuhl and Illing, 2012; Rauch et al., 2016; Rosskothen-Kuhl et al., 2018). We verified that this procedure provoked profound hearing loss (>90 dB) by the loss of Preyer’s reflex (Jero et al., 2001), the absence of ABRs to broadband click stimuli (Figure 5b) as well as pure tones (at 500, 1000, 2000, and 8000 Hz), and by performing histological assessment on cochlea sections of 11-week-old ND rats (data not shown). ABRs were measured as described in Rosskothen-Kuhl et al., 2018 under ketamine (80 mg/kg) and xylazine (12 mg/kg) anesthesia. Each ear was stimulated separately through hollow ear bars with 0.5 ms broadband clicks with peak amplitudes up to 130 dB sound pressure level (SPL), delivered at a rate of 43 Hz. ABRs were recorded by averaging scalp potentials, measured with subcutaneous needle electrodes between the mastoids and the vertex of the rat’s head, over 400 click presentations. While normal rats typically exhibited click ABR thresholds near 30 dB SPL (Figure 5a), deafened rats had very high click thresholds of ≥130 dB SPL (Figure 5b).

Examples of brainstem recordings to verify normal hearing or loss of hearing function as well as the symmetrical placement of cochlear implants (CIs).

Each recording is from a single animal. Panels (b) and (c) come from the same animal pre- and post-CI implantation. (a) Auditory brainstem responses of an acoustically stimulated normal hearing (NH) rat. ABRs are symmetric for both ears and show clear differentiation. (b) ABRs of a neonatally deafened (ND) rat. No hearing thresholds were detectable up to 130 dB SPL. (c) Electrically evoked ABRs under CI stimulation of a deafened rat. Each sub-panel includes measurements for the left and the right ears, respectively, under acoustic (a, b) or electric stimulation (c). In (c), the first millisecond (electrical stimulus artifact) is blanked out.

CI implantation, stimulation, and testing


All animals were implanted in early adulthood (between 10 and 14 weeks postnatally) for both behavioral training and electrophysiology recordings (Figure 1). All surgical procedures, including CI implantation and craniotomy, were performed under anesthesia induced with i.p. injection of ketamine (80 mg/kg) and xylazine (12 mg/kg). For maintenance of anesthesia during electrophysiological recordings, a pump delivered an i.p. infusion of 0.9% saline solution of ketamine (17.8 mg/kg/h) and xylazine (2.7 mg/kg/h) at a rate of 3.1 ml/h. During the surgical and experimental procedures, the body temperature was maintained at 38°C using a feedback-controlled heating pad (RWD Life Sciences, Shenzhen, China). Further detailed descriptions of our cochlear implantation methods can be found in previous studies (Rosskothen et al., 2008; Rosskothen-Kuhl and Illing, 2010; Rosskothen-Kuhl and Illing, 2012; Rosskothen-Kuhl and Illing, 2014; Rosskothen-Kuhl and Illing, 2015).

In short, two to four rings of an eight-channel electrode carrier (Cochlear Ltd animal array ST08.45, Cochlear Ltd, Peira, Beerse, Belgium) were fully inserted through a cochleostomy in medio-dorsal direction into the middle turn of both cochleae. Electrically evoked ABRs (EABRs) were measured for each ear individually to verify that both CIs were successfully implanted and operated at acceptably low electrical stimulation thresholds, usually around 100 μA with a duty cycle of 61.44 µs positive, 40.96 µs at zero, and 61.44 µs negative (Figure 5c). EABR recording used isolated biphasic pulses (see below) with a 23 ms inter-pulse interval. EABR mean amplitudes were determined by averaging scalp potentials over 400 pulses for each stimulus amplitude. For electrophysiology experiments, EABRs were also measured immediately before and after IC recordings, and for the chronically implanted rats, EABRs were measured once a week under anesthesia to verify that the CIs functioned properly and stimulation thresholds were stable.

Electric and acoustic stimuli


The electrical stimuli used to examine the animals’ EABRs, the physiological, and the behavioral ITD sensitivity were generated using a Tucker-Davis Technology (TDT, Alachua, FL) IZ2MH programmable constant current stimulator at a sample rate of 48,828.125 Hz. The most apical ring of the CI electrode served as stimulating electrode, the next ring as ground electrode. All electrical intracochlear stimulation used biphasic current pulses similar to those used in clinical devices (duty cycle: 61.44 µs positive, 40.96 µs at zero, 61.44 µs negative), with peak amplitudes of up to 300 μA, depending on physiological thresholds or informally assessed behavioral comfort levels (rats will scratch their ears frequently, startle or show other signs of discomfort if stimuli are too intense). For behavioral training, we stimulated all NDCI-B rats 6 dB above these thresholds.

Calibration measurements for electric ITD stimuli were performed by connecting the stimulator cable to 10 kΩ resistors instead of the in vivo electrodes and recording voltages using a Tektronix MSO 4034B oscilloscope (350 MHz bandwidth, 2.5 GS/s sampling rate). The stimulator was programmed to produce biphasic rectangular stimulus pulses with a 20 µA amplitude and a 20.5 µs interval between the positive and the negative phases. Measured calibration pulses such as those shown in Figure 2—figure supplement 1c were used to verify that electric ILDs were negligible and did not vary systematically with ITD. ILDs were computed as the difference in root mean square power of the signals in Figure 2—figure supplement 1d. These residual ILDs produced by device tolerances in our system are not only an order of magnitude smaller than the ILD thresholds for human CI subjects reported in the literature (~0.1 dB; van Hoesel and Tyler, 2003), they also do not covary with ITD. We can therefore be certain that sensitivity to ILDs cannot account for our behavior data. Acoustic stimuli used to measure behavioral ITD sensitivity in NH-B rats consisted of a single-sample pulse (generated as a digital delta function ‘click’) at a sample rate of 48,000 Hz. Acoustic stimuli were presented via a Raspberry Pi 3 computer connected to a USB sound card (StarTech.com, Ontario, Canada, part # ICUSBAUDIOMH), amplifier (Adafruit stereo 3.7 W class D audio amplifier, New York City, NY, part # 987), and miniature high-fidelity headphone drivers (GQ-30783–000, Knowles, Itasca, IL), which were mounted on hollow tubes. The single-sample pulse stimuli resonated in the tube phones to produce acoustic clicks, which decayed exponentially over a couple of milliseconds (see Figure 2—figure supplement 2d). Stimuli were delivered at sound intensities of ≈80 dB SPL.
A 3D printed ‘rat acoustical manikin’ with miniature microphones in each ear canal was used to validate that the acoustic setup delivered the desired ITDs and no usable intensity cues (see Figure 2—figure supplement 2 and Li et al., 2019). Note that the residual ILDs are much smaller than the reported behavioral ILD thresholds for ferrets (~1.3 dB; Keating et al., 2014) or rats (~3 dB; Wesolek et al., 2010), so sensitivity to ILDs cannot account for our behavior data under acoustic stimulation either.
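As a concrete illustration of this calibration check, a residual ILD can be expressed in dB from the RMS amplitudes of the two recorded channels. The following is a minimal sketch of our own (function and variable names are hypothetical), not the authors' analysis code:

```python
import numpy as np

def ild_db(left, right):
    """Residual ILD in dB: difference in root mean square power between the
    two channels, i.e. 10*log10 of the power ratio, which equals
    20*log10 of the RMS amplitude ratio."""
    rms = lambda x: np.sqrt(np.mean(np.asarray(x, dtype=float) ** 2))
    return 20.0 * np.log10(rms(left) / rms(right))
```

Identical left and right calibration traces give 0 dB, while halving one channel's amplitude gives ≈6 dB, orders of magnitude above the residual device ILDs reported here.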

To produce electric or acoustic stimuli of varying ITDs spanning the rat’s physiological range of ±130 µs (Koka et al., 2008), stimulus pulses of identical shape and amplitude were presented to each ear, with the pulses in one ear delayed by an integer number of samples. Given the sample rates of the devices used, ITDs could thus be varied in steps of 20.48 µs for the electrical stimuli and 20.83 µs for the acoustic stimuli.
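The relationship between device sample rate and achievable ITD step size can be sketched as follows. This is our own illustration of the arithmetic, not the authors' stimulus code, and the variable names are hypothetical:

```python
# Sample rates as stated in the text
ELECTRIC_FS = 48828.125  # Hz, TDT IZ2MH stimulator
ACOUSTIC_FS = 48000.0    # Hz, USB sound card

# One sample of delay corresponds to one ITD step
electric_step_us = 1e6 / ELECTRIC_FS  # ≈20.48 µs
acoustic_step_us = 1e6 / ACOUSTIC_FS  # ≈20.83 µs

def itd_values(step_us, max_us=163.84):
    """All ITDs reachable by whole-sample interaural delays within ±max_us."""
    n = int(round(max_us / step_us))
    return [k * step_us for k in range(-n, n + 1)]
```

With the electric step size, ±163.84 µs spans 8 samples per side, giving 17 distinct ITD values including zero.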

Animal psychoacoustic testing


We trained our rats on 2AFC sound lateralization tasks using methods similar to those described in Walker et al., 2009; Bizley et al., 2013; Keating et al., 2013a; Li et al., 2019. The behavioral animals were put on a schedule of 6 days of testing, during which the rats obtained their drinking water as a positive reinforcer, followed by 1 day off with ad lib water. The evening before the next behavioral testing period, drinking water bottles were removed. During testing periods, the rats were given two sessions per day. Each session lasted 25–30 min, during which the animals typically performed 150–200 trials and consumed ≈10 ml of water.

One of the walls of each behavior cage was fitted with three brass water spouts, mounted ≈6–7 cm from the floor, and separated by ≈7.5 cm (Figure 2—figure supplement 1a-b; Figure 2—figure supplement 2a-c). We used one center ‘start spout’ for initiating trials and one left and one right ‘response spout’ for indicating whether the stimulus presented during the trial was perceived as lateralized to that side. Contact with the spouts was detected by capacitive touch detectors (Adafruit Industries, New York City, NY, part # 1362). Initiating a trial at the center spout triggered the release of a single drop of water through a solenoid valve. Correct lateralization triggered three drops of water as positive reinforcement. Incorrect responses triggered no water delivery but caused a 5–15 s timeout during which no new trial could be initiated. Timeouts were also marked by a negative feedback sound for the NH-B rats. Given that CI stimulation can be experienced as effortful by human patients (Perreau et al., 2017), and to avoid potential discomfort from prolonged negative feedback stimuli, the NDCI-B rats received a flashing LED as an alternative negative feedback stimulus. The LED was housed in a sheet of aluminum foil both to direct the light forwards and to ground the light to the setup. After each correct trial a new ITD was chosen randomly from a set spanning ±160 μs in 25 µs steps, but after each incorrect trial, the last stimulus was repeated in a ‘correction trial’. Correction trials prevent animals from developing idiosyncratic biases favoring one side (Walker et al., 2009; Keating et al., 2014), but since they could be answered correctly without attention to the stimuli by a simple ‘if you just made a mistake, change side’ strategy, they are excluded from the final psychometric performance analysis.
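The trial sequencing rule described above can be sketched as follows. This is our own hypothetical illustration, not the authors' control software, and the exact placement of the ITD steps within ±160 µs is an assumption:

```python
import random

# Assumed stimulus set: ±160 µs in 25 µs steps (exact membership is our guess)
ITD_SET_US = [sign * v for v in range(10, 161, 25) for sign in (-1, 1)]

def next_trial(last_itd, last_correct):
    """Return (itd, is_correction) for the upcoming trial: a fresh random ITD
    after a correct trial, a repeat ('correction trial') after an error.
    Correction trials are flagged so they can be excluded from analysis."""
    if last_itd is None or last_correct:
        return random.choice(ITD_SET_US), False
    return last_itd, True
```

Flagging correction trials at generation time makes it trivial to drop them from the psychometric analysis later, as the text requires.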

The NH-B rats received their acoustic stimuli through stainless steel hollow ear tubes placed such that, when the animal was engaging the start spout, the tips of the tubes were located right next to each ear of the animal to allow near-field stimulation (Figure 2—figure supplement 2a). The pulses resonated in the tubes, producing pulse-resonant sounds, resembling single-formant artificial vowels with a fundamental frequency corresponding to the 50 Hz click rate. Note that this mode of sound delivery is thus very much like that produced by ‘open’ headphones, such as those commonly used in previous studies on binaural hearing in humans and animals, for example (Wightman and Kistler, 1992; Keating et al., 2013a). We used a 3D printed ‘rat acoustical manikin’ with miniature microphones in the ear canals (Figure 2—figure supplement 2c). It produced a channel separation between ears of ≥20 dB at the lowest, fundamental frequency, and around 40 dB overall. Further details on the acoustic setup and procedure are described in Li et al., 2019. The NDCI-B rats received their auditory stimulation via bilateral CIs described above, connected to the TDT IZ2MH stimulator via a custom-made, head mounted connector and commutator, as described in Rosskothen-Kuhl and Illing, 2014.

Multi-unit recording from IC


Immediately following bilateral CI implantation, anesthetized NDCI-E rats were head-fixed in a stereotactic frame (RWD Life Sciences), and craniotomies were performed bilaterally just anterior to lambda. A single-shaft, 32-channel silicon electrode array (ATLAS Neuroengineering, E32-50-S1-L6) was inserted stereotactically into the left or right IC through the overlying occipital cortex using a micromanipulator (RWD Life Sciences). Extracellular signals were sampled at a rate of 24.414 kHz with a TDT RZ2 with a NeuroDigitizer head-stage and BrainWare software. Our recordings typically exhibited short response latencies (≈3–5 ms), which suggests that they may come predominantly from the central region of IC. Responses from non-lemniscal sub-nuclei of IC have been reported to have longer response latencies (≈20 ms; Syka et al., 2000).

At each electrode site, we first measured neural rate/level functions, varying stimulation currents in each ear to verify that the recording sites contained neurons responsive to cochlear stimulation, and to estimate threshold stimulus amplitudes. Thresholds rarely varied substantially from one recording site to another in any one animal. We then measured ITD tuning curves by presenting single pulse binaural stimuli with equal amplitude in each ear, ≈10 dB above the contralateral ear threshold, in pseudo-random order. ITDs varied from 163.84 μs contralateral ear leading to 163.84 μs ipsilateral ear leading in 20.48 μs steps. Each ITD value was presented 30 times at each recording site. The inter-stimulus interval was 500 ms. At the end of the recording session, the animals were overdosed with pentobarbitone.

Data analysis


To quantify the extracellular multi-unit responses, we calculated the average activity for each stimulus over a response period (3–80 ms post-stimulus onset) as well as baseline activity (300–500 ms after stimulus onset) at each electrode position. The first 2.5 ms post-stimulus onset was dominated by electrical stimulus artifacts and was discarded. For display purposes of the raster plots in Figure 3, we extracted multi-unit spikes by simple threshold crossings of the band-passed (300 Hz–6 kHz) electrode signal, with a threshold set at 4 standard deviations of the signal amplitude. To quantify responses for tuning curves, instead of counting spikes by threshold crossings, we computed an analog measure of multi-unit activity (AMUA) amplitudes as described in Schnupp et al., 2015. The mean AMUA amplitude during the response and baseline periods was computed by band-passing (300 Hz–6 kHz), rectifying (taking the absolute value), and low-passing (6 kHz) the electrode signal. This AMUA value thus measures the mean signal amplitude in the frequency range in which spikes have energy. As illustrated in Figure 1 of Schnupp et al., 2015, this gives a less noisy measure of multi-unit neural activity than counting spikes by conventional threshold crossing measures, because the latter are subject to errors from spike collisions, noise events, or small spikes that sometimes reach threshold and sometimes do not. The tuning curves shown in the panels of Figure 3 are the normalized AMUA responses averaged across the 30 trials for each ITD; in the corresponding raster plots, each vertical panel represents one ITD and each dot one spike. Changes in the AMUA amplitudes tracked changes in spike density.
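The AMUA pipeline (band-pass, rectify, low-pass) can be sketched in a few lines of scipy. The filter order and design chosen below (second-order Butterworth, zero-phase filtering) are our own assumptions for illustration, not necessarily those of Schnupp et al., 2015:

```python
import numpy as np
from scipy.signal import butter, filtfilt

def amua(signal, fs=24414.0):
    """Analog measure of multi-unit activity: band-pass the raw electrode
    signal to the spike-energy band, full-wave rectify, then low-pass to
    extract a smooth envelope of multi-unit activity."""
    # Band-pass 300 Hz - 6 kHz (the band in which spikes have energy)
    b_bp, a_bp = butter(2, [300.0, 6000.0], btype="bandpass", fs=fs)
    x = filtfilt(b_bp, a_bp, signal)
    # Rectify, then low-pass at 6 kHz to obtain the AMUA envelope
    b_lp, a_lp = butter(2, 6000.0, btype="low", fs=fs)
    return filtfilt(b_lp, a_lp, np.abs(x))
```

Averaging this envelope over the 3–80 ms response window then yields the per-trial response amplitudes used for the tuning curves and the STVR.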

STVR calculation


STVR values are a measure of the strength of tuning of neural responses to ITD, which we adopted from Hancock et al., 2010 to facilitate quantitative comparisons. The STVR is defined in Hancock et al., 2010 as the proportion of trial-to-trial variance in response amplitude explained by changes in ITD. It is calculated by computing a one-way ANOVA of responses grouped by ITD value and dividing the between-group (signal) sum of squares by the total sum of squares. This yields values between 0 (no effect of ITD) and 1 (response amplitudes completely determined by ITD). p-Values were also computed from the one-way ANOVA, and p<0.01 served as the criterion for deeming the ITD tuning of a given multi-unit statistically significant. Each ITD value was presented 30 times, yielding 29 degrees of freedom per group for the ANOVA.
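The STVR computation amounts to the standard variance decomposition of a one-way ANOVA. The sketch below is our own illustration of that definition, not code from Hancock et al., 2010:

```python
import numpy as np

def stvr(responses_by_itd):
    """Signal-to-total variance ratio: responses_by_itd is a list of arrays,
    one array of per-trial response amplitudes for each ITD value."""
    all_resp = np.concatenate(responses_by_itd)
    grand_mean = all_resp.mean()
    # Between-group ("signal") sum of squares: variance explained by ITD
    ss_group = sum(len(r) * (r.mean() - grand_mean) ** 2
                   for r in responses_by_itd)
    # Total sum of squares over all trials
    ss_total = np.sum((all_resp - grand_mean) ** 2)
    return ss_group / ss_total  # 0 = no ITD effect, 1 = fully ITD-determined
```

A unit whose response is constant within each ITD but differs across ITDs scores exactly 1; a unit whose mean response is identical at every ITD scores 0.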

Psychometric curve fitting


In order to derive summary statistics that could serve as measures of ITD sensitivity from the thousands of trials performed by each animal, we fitted psychometric models to the observed data. It is common practice in human psychophysics to fit performance data with cumulative Gaussian functions (Wickens, 2002; Schnupp et al., 2005). This practice is well motivated in signal detection theory, which assumes that the perceptual decisions made by the experimental subject are informed by sensory signals, which are subject to multiple, additive, and hence approximately normally distributed sources of noise. When the sensory signals are very large relative to the inherent noise, then the task is easy and the subject will make the appropriate choice with near certainty. For binaural cues closer to threshold, the probability of choosing the ‘right’ spout (pR) can be modeled by the function

(1) pR = Φ(α · ITD)

where Φ is the cumulative normal distribution, ITD denotes the interaural time difference (arrival time at left ear minus arrival time at right ear, in ms), and α is a sensitivity scale parameter that captures how big a change in the proportion of ‘right’ choices a given change in ITD can provoke, with units of 1/ms.

Functions of the type in Equation 1 tend to fit psychometric data for 2AFC tests with human participants well, where subjects can be easily briefed and lack of clarity about the task, lapses of attention, or strong biases in the perceptual choices are small enough to be ignored. However, animals have to work out the task for themselves through trial and error and may spend some proportion of trials on ‘exploratory guesses’ rather than direct perceptual decisions. If we denote the proportion of trials during which the animal makes such guesses (the ‘lapse rate’) by γ, then the proportion of trials during which the animal’s responses are governed by processes that are well modeled by Equation 1 is reduced to (1−γ). Furthermore, animals may exhibit two types of bias: an ‘ear bias’ and a ‘spout bias’. An ‘ear bias’ exists if the animal hears the midline (50% right point) at ITD values that differ from zero by some small value β. A ‘spout bias’ exists if the animal has an idiosyncratic preference for one of the two response spouts, which may increase its probability of choosing the right spout by δ (where δ can be negative if the animal prefers the left spout). Assuming the effects of lapses, spout, and ear bias to be additive, we therefore extended Equation 1 to the following psychometric model:

(2) pR = Φ(ITD · α + β) · (1 − γ) + γ/2 + δ

We fitted the model in Equation 2 to the observed proportions of ‘right’ responses as a function of stimulus ITD with the scipy.optimize.minimize() function of Python 3.4, using gradient descent methods to find maximum likelihood estimates of the parameters α, β, γ, and δ given the data. This cumulative Gaussian model fitted the data very well, as is readily apparent in Figure 2a–j. We then used the slope of the psychometric function around zero ITD as our maximum likelihood estimate of the animal’s ITD sensitivity, as plotted in Figure 2k. That slope is easily calculated using Equation 3

(3) slope = φ(0) · α · (1 − γ)

which is obtained by differentiating Equation 2 with respect to ITD and setting ITD = 0. Here φ(0) is the standard normal probability density at zero (≈0.3989).
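The fitting procedure described above can be sketched in Python as follows. This is a minimal illustration, not the authors' code: the function names, starting values, and the choice of Nelder-Mead (one of the local optimizers available in scipy.optimize.minimize, standing in for the gradient descent methods mentioned in the text) are all assumptions, and the data are assumed to be summarized as per-ITD counts of 'right' responses.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def p_right(itd_ms, alpha, beta, gamma, delta):
    """Equation 2: probability of a 'right' response at a given ITD (in ms)."""
    p = norm.cdf(itd_ms * alpha + beta) * (1 - gamma) + gamma / 2 + delta
    return np.clip(p, 1e-9, 1 - 1e-9)  # keep the log-likelihood finite

def neg_log_likelihood(params, itd_ms, n_right, n_trials):
    """Binomial negative log-likelihood of the observed 'right'-response counts."""
    alpha, beta, gamma, delta = params
    p = p_right(itd_ms, alpha, beta, gamma, delta)
    return -np.sum(n_right * np.log(p) + (n_trials - n_right) * np.log(1 - p))

def fit_psychometric(itd_ms, n_right, n_trials):
    """Maximum likelihood fit of Equation 2; returns (alpha, beta, gamma, delta) and
    the slope at zero ITD (Equation 3). Starting values are illustrative guesses."""
    res = minimize(neg_log_likelihood, x0=[1.0, 0.0, 0.05, 0.0],
                   args=(np.asarray(itd_ms), np.asarray(n_right), np.asarray(n_trials)),
                   method="Nelder-Mead")
    alpha, beta, gamma, delta = res.x
    slope_at_zero = norm.pdf(0) * alpha * (1 - gamma)  # Equation 3
    return res.x, slope_at_zero
```

Fitting the binomial likelihood directly, rather than least-squares on the proportions, correctly gives ITD values tested on many trials more weight than sparsely sampled ones.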

Seventy-five percent correct thresholds were computed as the mean absolute ITD at which the psychometric function dips below 25% or rises above 75% ‘right’ responses, respectively.
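The threshold computation just described amounts to inverting the fitted Equation 2 at the 25% and 75% points and averaging the absolute ITDs. A hypothetical helper (the function name and the analytic inversion are illustrative assumptions, not the authors' code) might look like:

```python
import numpy as np
from scipy.stats import norm

def threshold_75(alpha, beta, gamma, delta):
    """Mean absolute ITD (ms) at which the fitted psychometric function
    (Equation 2) crosses 25% and 75% 'right' responses."""
    itds = []
    for p_target in (0.25, 0.75):
        # Undo the lapse rate and spout bias, then invert the cumulative
        # Gaussian and the ear bias to recover the crossing ITD.
        z = norm.ppf((p_target - gamma / 2 - delta) / (1 - gamma))
        itds.append((z - beta) / alpha)
    return np.mean(np.abs(itds))
```

For an unbiased, lapse-free animal this reduces to Φ⁻¹(0.75)/α, so a sensitivity of α ≈ 13.5/ms corresponds to a threshold of about 0.05 ms (50 μs), the order of magnitude reported in the Abstract.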

Data availability

All data generated or analysed during this study are included in the manuscript and supporting files. Data have been deposited to Dryad under https://doi.org/10.5061/dryad.573n5tb6d.


References

  1. Arndt S, Laszig R, Aschendorff A, Beck R, Waltzman SB (2014) Expanding Criteria for the Evaluation of Cochlear Implant Candidates. Thieme Medical Publishers.
  2. Illing R-B, Rosskothen-Kuhl N (2012) The Cochlear Implant in Action: Molecular Changes Induced in the Rat Central Auditory System. Freiburg: INTECH Open Access Publisher.
  3. Wickens TD (2002) Elementary Signal Detection Theory. Oxford University Press.
  4. Zwislocki J, Feldman RS (1956) Just noticeable dichotic phase difference. The Journal of the Acoustical Society of America 28:152–153. https://doi.org/10.1121/1.1918072

Decision letter

  1. Lina Reiss
    Reviewing Editor; Oregon Health and Science University, United States
  2. Barbara G Shinn-Cunningham
    Senior Editor; Carnegie Mellon University, United States
  3. Lina Reiss
    Reviewer; Oregon Health and Science University, United States

In the interests of transparency, eLife publishes the most substantive revision requests and the accompanying author responses.

Acceptance summary:

This is the first study to demonstrate near-normal interaural time difference (ITD) thresholds using both behavioral tasks and acute electrophysiological recordings in an early-deafened animal model. This finding contrasts with the poor ITD sensitivity in early-deafened human patients with bilateral cochlear implants, suggesting that deafness and atrophy of neural pathways may not by themselves cause poor ITD sensitivity. This raises the possibility that in human patients, other factors such as exposure to bilateral stimulation with unsynchronized CI devices may cause maladaptive plasticity and interfere with the ability to use ITD cues.

Decision letter after peer review:

Thank you for submitting your article "Microsecond Interaural Time Difference Discrimination Restored by Cochlear Implants After Neonatal Deafness" for consideration by eLife. Your article has been reviewed by three peer reviewers, including Lina Reiss as the Reviewing Editor and Reviewer #1, and the evaluation has been overseen by Barbara Shinn-Cunningham as the Senior Editor.

The reviewers have discussed the reviews with one another and the Reviewing Editor has drafted this decision to help you prepare a revised submission.

As the editors have judged that your manuscript is of interest, but as described below that additional experiments may be required before it is published, we would like to draw your attention to changes in our revision policy that we have made in response to COVID-19 (https://elifesciences.org/articles/57162). First, because many researchers have temporarily lost access to the labs, we will give authors as much time as they need to submit revised manuscripts. We are also offering, if you choose, to post the manuscript to bioRxiv (if it is not already there) along with this decision letter and a formal designation that the manuscript is "in revision at eLife". Please let us know if you would like to pursue this option. (If your work is more suitable for medRxiv, you will need to post the preprint yourself, as the mechanisms for us to do so are still in development.)

Summary:

All three reviewers were generally positive and in favor of publication after attention to a large number of specific details. The reviewers also noted and agreed that the conclusions need to be toned down regarding the role of maladaptive stimulation, which received a great deal of attention in the manuscript, but is not sufficiently supported by the existing data. There are additional experiments that would need to be done to support that claim, which are beyond the scope of the current manuscript. While this should be toned down, the findings do raise important questions about the role of maladaptive plasticity, and more clarity is needed in the discussion of the roles of deafness versus experience in the development of ITD sensitivity.

In addition, the reviewers felt that the results are quite surprising given the previous findings in similar animal studies. More details of the current experiments and discussion of and comparison to previously published work in animals and humans are needed to understand the differences in the findings. For instance, more details on animal groups as noted by reviewer 2 and consideration of the role of implantation in the middle rather than basal turn of the cochlea, as well as the roles of human etiologies noted by reviewer 1.

In addition, more precise discussion of previous literature on ITD sensitivity is needed as noted by reviewer 2, as well as a thorough discussion of critical period and the relationship to ITD sensitivity as noted by reviewer 3. The authors should also clearly state the hypotheses of the study as noted by reviewer 3.

Reviewer #1:

This is an interesting study that examines whether neonatally deafened animals can learn to use ITD cues provided by bilateral cochlear implants.

This is the first study to investigate this with animal training as well as acute electrophysiological recordings as previously done in cats and rabbits. The finding of near-normal ITD sensitivity in these animals, which contrasts with findings in human patients with bilateral cochlear implants, raises important questions about the role of maladaptive plasticity due to previous hearing aid or unsynchronized CI use, as opposed to just atrophy due to deafness alone. At the very least, the findings suggest that ITD coding is possible to restore even if deafening occurred early in the critical period.

Another way that this study differs from the previous animal studies is implantation of the electrode array in the middle rather than basal turn of the cochlea, which may allow current to better reach lower frequency neurons that are better represented in ITD pathways. This may also explain the improved ITD coding observed in the ICC compared to the previous findings in cats.

While the proposed role of maladaptive plasticity is compelling, additional proof would be needed to confirm this, such as the converse experiment in which ITD sensitivity is measured after exposure to unsynchronized pulse trains. There are other explanations for poor ability to use ITD cues in humans, including different etiologies of deafness in those deafened as children (usually genetic, not chemical), which may affect neural coding and synchronization. Some people with congenital deafness have zero auditory experience, unlike the rats in this study which have some minimal auditory experience before deafening. Thus, the converse experiment is necessary to address these potential alternative explanations, and should be discussed.

Some additional methodology concerns:

1) ABRs were measured using clicks, not tones. For rigor, can you provide data from one animal using tones for ABR, to verify complete hearing loss at all frequencies, especially low frequencies which are less vulnerable to chemical deafening? Verification is more important for this study than for the previous studies which placed the implant in the basal part of the cochlea, near the region of deafening.

2) The level differences in Figure 2—figure supplement 1D show that there was a higher ILD for +100 than -100 microseconds. Why did this occur, and why wasn't this cue eliminated? This leaves open the possibility that rats could use this cue instead of ITD. Please also show the calibration results for the intermediate ITDs, not just -100 and +100.

3) Why were the NH rats stimulated with two tubes instead of a single loudspeaker? Wouldn't this lead to something like the precedence effect, and complicate interpretation of the NH data? The rationale should be discussed (I presume this is to control for ILD).

Reviewer #2:

This manuscript reports behavioral and physiological sensitivity of neonatally deafened rats to interaural timing differences (ITD) in cochlear implant stimulation. The reported thresholds, around 50 microsec, are remarkably short, certainly shorter than observations in pre-lingually deaf humans and considerably shorter than most physiological results in other animals. Indeed, I was initially inclined to question the result, but the experimental design and conduct of the experiments seem solid. The over-zealous tone of the presentation detracts from the impact of the study.

The authors tend to overstate the poor ITD sensitivity of cochlear implant (CI) users. The Introduction, for instance, states that "ITD sensitivity of CI patients is poor or completely missing". This might be more-or-less true of pre-lingually deaf CI patients. It is not correct, however, for post-lingually deaf patients, and the authors don't make that distinction here. Figure 2 of Laback et al., 2011, for instance, shows that the median ITD threshold for post-lingual patients is between 100 and 200 microsec and that post-lingual patients with no ITD sensitivity are rare (about 9% in that figure). Thresholds of 100-200 microsec are comparable to those of normal hearing human listeners detecting interaural delays in sound envelopes. Again, in subsection “Early deaf CI rats discriminate ITD as accurately as their normally hearing litter mates”, I think that it is not correct that pre-lingual CI patients "usually" have ITD thresholds "too large to measure", although I would accept "often". It is important to note that ITD sensitivity in CI patients declines as pulse rates are increased above about 50 pps, and clinical processors run at >900 pps. The present study used single bilateral pulse pairs for physiology and relatively low (50 pps) pulse rates for behavior. For that reason, the present results in rats cannot be compared with human results obtained with clinical processors, only with those made with laboratory processors.

There are as many as 5 experimental groups of rats that are lumped together as neonatally deafened (ND) and normal hearing (NH). I count the following groups:

1) Neonatally deafened, trained and tested for behavior

2) Neonatally deafened, tested with inferior colliculus (IC) recording

3) Raised with normal hearing, trained and tested for behavior using sound stimulation

4) Raised with normal hearing, tested with IC recording and sound stimulation

5) Raised with normal hearing, deafened as adults (I think), implanted with CIs, and tested with IC recording – These animals are referred to as "NH rats", the same as groups 3 and 4.

We need to know if groups 1 and 2 are the same rats. If so, what was the history of electrical stimulation prior to IC recording? That is, did they get just two 25-30-minute sessions a day of stimulation for about 14 days, or did they receive some other electrical stimulation? Were group 5 rats deafened, implanted, and recorded all the same day, or did time elapse between implantation and IC recording? Were those rats deafened and, if so, how? We need a different label to distinguish group 5 from groups 3 and 4. The caption of Figure 3 refers to "hearing experienced" rats and cats, which I think is a useful term. Still, the presentation of Figure 3 in the text refers only to "NH", and it is not obvious that the data in Figure 3 are, I guess, from rats that were raised with normal hearing, then deafened, implanted, and stimulated electrically as young adults.

Reviewer #3:

This paper has significance in the fields of Developmental Biology and Neuroscience and warrants publication in eLife. The aim of the current study, using behavioral and physiological evidence, was to determine if interaural time difference (ITD) thresholds can be measured in neonatally deafened rats when implanted with cochlear implants in adulthood. In addition, the rats were provided with rigorous training to ITDs. A secondary aim of the study was to debunk the idea of an early "critical period" for learning ITDs. The study found that neonatally deafened rats were able to behaviorally respond and learn to use ITDs with exquisite precision even when no input is provided during an early "critical period", and that this was due to the presentation of consistent, salient ITDs after cochlear implantation.

As a basic science question, the work in this study provides an excellent animal model for future behavioral testing in binaural tasks. In addition, the methods and implementation are entirely sound, the work is well-motivated, and it falls within the scope of this journal. Clinically, there is a need in the field to understand the impact of bilateral implantation on the developing auditory system and its ability to provide the necessary cues for sound localization. This study provides a good stepping stone for future developmental studies that would like to probe whether better processing strategies, or early intervention with cochlear implants, is the more pressing issue for ascertaining ITD sensitivity. However, there were major logical flaws or gaps in the manuscript. These could easily be fixed with changes to the language in the Introduction and Discussion. These major issues are outlined below:

1) The concept of the "critical period" is not well-defined. More of the literature on the relationship between critical periods and ITD sensitivity should be surveyed in order to understand what the authors are referring to in the context of the present study. First, it would be very helpful to the reader to be able to visualize, with a schematic or timeline, how the animals in the current study were deafened, implanted, and subsequently trained with ITDs, compared with other studies. Second, what is the nature of this critical period? Is it a period for development of a skill, a vulnerable period, a period for recovery from deprivation? These examples are taken from Kral, 2013 as examples of previously published definitions. I think it might be in the authors' benefit to clearly define what this critical period represents so that the interpretation of the results can be easily understood.

2) Throughout the Introduction and Discussion there is an implicit suggestion about how deafness at an early age may not necessarily cause a loss of ITD sensitivity, and instead that CIs have limitations which could prevent learning of ITDs. However, a distinction needs to be made on what type of limitations are being debated: is it (a) the inability of CIs to provide ITDs or is it (b) exposure to inconsistent ITDs through bilateral CIs that is preventing learning of those cues? In the manuscript, the authors appear to advocate for the latter point. This distinction is further confused by the sentence in the Introduction where the sensitivity of CI patients is mentioned, citing papers with a mix of measurements made with both clinical processors and synchronized research processors. Therefore, if the authors are suggesting that poor delivery of ITDs using unsynchronized processors is preventing learning of ITD cues in humans, then an equivalent condition with unsynchronized processors needs to be tested in the rats. This must be done in order to confidently say that synchronized ITDs through the CI need to be conveyed in order to promote learning of binaural cues.

3) A final consideration, the authors should clearly state the hypotheses of the study. The authors have designed a unique study with two experiments, one behavioral and one physiological, whereby response lateralization and the proportion of ITD sensitive neurons were used as a metric to understand whether ND rats were able to (a) utilize ITDs and (b) possessed the underlying neural structures which reflect the behavioral data. It is imperative for the authors to state a set of hypotheses in relation to these two experiments and subsequently describe to the reader how they were carried out and measured. This means that, somewhere in the Introduction, there should be a justification for measuring response lateralization and proportion of ITD sensitive neurons/SNR for the behavioral and physiological experiments, respectively.

Introduction:

Needs a thorough literature review on the auditory critical period in relation to ITD sensitivity. The authors should also discuss why there hasn't been a rigorous study to understand critical periods and ITD sensitivity.

Needs to make distinction between early-deafened vs. late-deafened. This will help to understand the relation between human and rats.

Needs more information on training and plasticity in order to build up the question. While prior studies have shown that early deafness does impact ITD sensitivity, they do not suggest that sensitivity could not be relearned. Please describe evidence (if any) that supports the notion that lack of exposure to binaural cues in adulthood, rather than a total loss of binaural function early in life, limits sensitivity to binaural cues for early-deafened populations.

As stated earlier, a distinction should be made between synchronized delivery of binaural cues and unsynchronized bilateral implantation and which one of these is the contributing factor to improper learning or acquisition of ITDs in adulthood.

Subsection “Varying degrees and types of ITD tuning are pervasive in the neural responses in the IC of ND rats immediately after adult cochlear implantation”: The interpretation of various shapes of tuning curves isn't always known to the average reader. It would be helpful, either here or in the data analysis section, to describe what each of the curve types indicates about the neurons that are being recorded from.

Discussion: Please expand on the importance of ITD processing in rats and what "generally poor" means in the context of the current study.

Discussion: The logic here is a little hard to follow. The authors state that only 48% of IC neurons were found to be ITD sensitive in ND subjects; conversely, in the current study, 91% of IC neurons were ITD sensitive. The claim appears to be that because the proportion of ITD sensitive neurons was greater in the present study, this is a hallmark of relearning of ITDs in the animal. However, nowhere in the Introduction or the hypotheses of the paper was this addressed. The hypotheses of the paper need to be clearly stated such that the reader can understand what measures from the (a) behavioral and (b) physiological data constitute "good sensitivity to ITDs".

Discussion: This sentence, "It is important to remember that…congenital cats" is very long and a little difficult to follow. I think it would be helpful if the authors previously described what characterized good vs. poor tuning. It sounds like the authors are trying to make a connection between the behavioral data and the neural data, however, this is unclear.

Discussion: There is a reference to "critical periods" in each of these animal models, however, the statements made are rather vague. In each of the ferret, mouse, and gerbil models, there is a reference to certain synapses and their relationship to binaural circuitry prior to the onset of hearing (i.e. the "critical period"). However, the authors do not explain what happens to the circuitry, and what is required to occur (either before or after the so-called "critical period") in order to overcome any deficits in ITD processing. In addition, the authors have not explicitly stated what makes the rat model in the present study unique for understanding the implications of the hypothesized "critical period."

Discussion: Authors should expand on why inhibition is important.

[Editors' note: further revisions were suggested prior to acceptance, as described below.]

Thank you for submitting your article "Microsecond Interaural Time Difference Discrimination Restored by Cochlear Implants After Neonatal Deafness" for consideration by eLife. Your article has been reviewed by three peer reviewers, including Lina Reiss as the Reviewing Editor and Reviewer #1, and the evaluation has been overseen by Barbara Shinn-Cunningham as the Senior Editor.

The reviewers have discussed the reviews with one another and the Reviewing Editor has drafted this decision to help you prepare a revised submission.

We would like to draw your attention to changes in our revision policy that we have made in response to COVID-19 (https://elifesciences.org/articles/57162). Specifically, we are asking editors to accept without delay manuscripts, like yours, that they judge can stand as eLife papers without additional data, even if they feel that they would make the manuscript stronger. Thus the revisions requested below only address clarity and presentation.

Summary:

The reviewers agree that the authors have made substantial improvements, especially with toning back the maladaptive plasticity hypothesis. However, there are two major remaining issues.

First, the writing of the Introduction still needs significant work, with more specific and crisper language, especially when referring to literature or ideas, such as when discussing the contribution of the critical period and related hypotheses as well as the role of training, as noted by reviewer 3. The authors also need to clarify interpretation of the rats' ability to use ITDs with exquisite precision as due to (1) provision of salient ITDs (independent of missed critical period) or (2) rigorous training and synchronized stimulation allowed rats to learn to use ITDs even though the critical period was missed.

There is also a new concern raised by reviewer 2 about the HE-CI group, which the revision indicates were not chemically deafened before implantation. Specifically, there is a potential for electrophonic responses (e.g., recent work by Kral's group; Sato et al., 2016). The acoustic frequency corresponding to the parameters used in this paper would be approximately the inverse of the pulse period, in this case 1/0.000164 or ~6000 Hz. It is not clear what tonotopic areas were recorded in the study and how electrophonic responses would affect the result. Even though the HE-CI group was given a conductive loss, this would not attenuate the signal as completely as chemical deafening, and inner/outer hair cells are likely to remain intact and permit indirect electrophonic stimulation of inner hair cells via the outer hair cells, as well as direct electrical stimulation of inner hair cells. After discussion, all of the reviewers agreed that this is a substantial experimental confound, and also that this group does not add value to the study. Therefore, we recommend to remove this group.

https://doi.org/10.7554/eLife.59300.sa1

Author response

Summary:

All three reviewers were generally positive and in favor of publication after attention to a large number of specific details. The reviewers also noted and agreed that the conclusions need to be toned down regarding the role of maladaptive stimulation, which received a great deal of attention in the manuscript, but is not sufficiently supported by the existing data. There are additional experiments that would need to be done to support that claim, which are beyond the scope of the current manuscript. While this should be toned down, the findings do raise important questions about the role of maladaptive plasticity, and more clarity is needed in the discussion of the roles of deafness versus experience in the development of ITD sensitivity.

In addition, the reviewers felt that the results are quite surprising given the previous findings in similar animal studies. More details of the current experiments and discussion of and comparison to previously published work in animals and humans are needed to understand the differences in the findings. For instance, more details on animal groups as noted by reviewer 2 and consideration of the role of implantation in the middle rather than basal turn of the cochlea, as well as the roles of human etiologies noted by reviewer 1.

In addition, more precise discussion of previous literature on ITD sensitivity is needed as noted by reviewer 2, as well as a thorough discussion of critical period and the relationship to ITD sensitivity as noted by reviewer 3. The authors should also clearly state the hypotheses of the study as noted by reviewer 3.

This is highly constructive criticism, and we have substantially rewritten the Discussion to accommodate it. In particular, we now acknowledge that, while our results shed substantial doubt on the currently prevailing critical period hypothesis, they do not by themselves prove that the alternative maladaptive plasticity hypothesis is correct. We now end the Discussion on this important point. We have also clarified the various experimental cohorts, expanded the discussion of the human clinical literature that provides the background, and edited our Introduction as suggested by reviewer 3 to spell out our principal objective (that is, to attempt falsification of the critical period hypothesis). We believe that we have been able to accommodate all the comments and criticisms of this excellent set of careful but constructive and fair-minded reviews.

Reviewer #1:

This is an interesting study that examines whether neonatally deafened animals can learn to use ITD cues provided by bilateral cochlear implants.

This is the first study to investigate this with animal training as well as acute electrophysiological recordings as previously done in cats and rabbits. The finding of near-normal ITD sensitivity in these animals, which contrasts with findings in human patients with bilateral cochlear implants, raises important questions about the role of maladaptive plasticity due to previous hearing aid or unsynchronized CI use, as opposed to just atrophy due to deafness alone. At the very least, the findings suggest that ITD coding is possible to restore even if deafening occurred early in the critical period.

Another way that this study differs from the previous animal studies is implantation of the electrode array in the middle rather than basal turn of the cochlea, which may allow current to better reach lower frequency neurons that are better represented in ITD pathways. This may also explain the improved ITD coding observed in the ICC compared to the previous findings in cats.

While the proposed role of maladaptive plasticity is compelling, additional proof would be needed to confirm this, such as the converse experiment in which ITD sensitivity is measured after exposure to unsynchronized pulse trains. There are other explanations for poor ability to use ITD cues in humans, including different etiologies of deafness in those deafened as children (usually genetic, not chemical), which may affect neural coding and synchronization. Some people with congenital deafness have zero auditory experience, unlike the rats in this study which have some minimal auditory experience before deafening. Thus, the converse experiment is necessary to address these potential alternative explanations, and should be discussed.

We thank Lina Reiss for her constructive criticism of our manuscript. We agree that, at present, the “maladaptive plasticity hypothesis” we put forward is merely one of several possible explanations for the different outcomes in our rats and in human patients. This is now spelled out very clearly in the final paragraph of the Discussion. We also agree that "converse experiments" in which CI rats experience unsynchronized ITDs from the beginning of CI stimulation would be an essential follow-up, and we are already piloting such experiments.

Some additional methodology concerns:

1) ABRs were measured using clicks, not tones. For rigor, can you provide data from one animal using tones for ABR, to verify complete hearing loss at all frequencies, especially low frequencies which are less vulnerable to chemical deafening? Verification is more important for this study than for the previous studies which placed the implant in the basal part of the cochlea, near the region of deafening.

We have recorded pure tone ABRs as best we can with our current setup as requested by the reviewer, for one ND rat and one normally hearing control rat. The results are shown in Author response image 1. Please be aware that our current setup uses headphone drivers which are not suitable for low frequency stimulation at high amplitudes. Signals will distort badly at high levels and low frequencies, which is why the recordings shown here only include sound levels up to 80 dB SPL. We are aware that ideally one would test substantially louder sound levels if one wanted to be absolutely sure that the animals involved are not just moderately but severely hearing impaired across the entire frequency region, even at frequencies below 1 kHz. To achieve that we would need to retool and recalibrate our setup with low frequency transducers, and if the reviewer insists that we do so, we will, but we hope to persuade the reviewer that this should not be necessary for two reasons. Firstly, all our behavioral results are obtained with electrodes implanted in the 8 kHz region, so even if there were any residual low frequency hearing, we would have to assume that this somehow induces essentially perfect ITD sensitivity in severely deaf frequency bands a full four octaves away, which would be extremely surprising. Secondly, and even more importantly, we have recently conducted histological studies on rats deafened neonatally with our kanamycin protocol and we were able to show that the treatment leads to a complete degeneration of the organ of Corti along the entire length of the cochlea. This histological evidence is shown in Author response image 2, and it precludes any possibility of residual hearing in our ND animals more effectively than ABR recordings ever could.
We hope that, together, these lines of evidence will reassure the reviewer that residual low frequency hearing can be safely disregarded as an explanation for the excellent CI ITD sensitivity documented in our manuscript.

Author response image 1
Pure tone ABRs for one normally hearing (NH, top row) and one neonatally deafened (ND, bottom row) animal.

The histological assessment of the effect of the neonatal kanamycin treatment was performed as follows: we perfused and cut the cochleae of 11-week-old (the age at bilateral CI implantation) normal hearing (n=5) and neonatally deafened (n=3) rats and visualized the four Organs of Corti (basal turn, lower middle turn, upper middle turn, apical turn) using hematoxylin and eosin staining over all 2.5 turns of the rat cochlea. In comparison to the Organ of Corti of normal hearing rats, we observed a complete loss of outer and inner hair cells over all turns of neonatally deafened rats (see Author response image 2, second row). Residual hearing ability of these animals in the low-frequency range can thus be excluded.

Author response image 2
Organs of Corti of 11-week-old normal hearing (first row) and neonatally deafened (second row) rats.

All four organ of Corti positions from base to apex are shown from left to right. While the organs of Corti of normal hearing rats show three rows of outer hair cells (green arrow) and one row of inner hair cells (orange arrow), both are completely missing in all cochlear turns of the deafened rats. Scale bar: 20 µm.

2) The level differences in Figure 2—figure supplement 1D show that there was a higher ILD for +100 than -100 microseconds. Why did this occur, and why wasn't this cue eliminated? This leaves open the possibility that rats could use this cue instead of ITD. Please also show the calibration results for the intermediate ITDs, not just -100 and +100.

As requested by the reviewer, we have collected calibration data for additional ITD values and replaced the original Figure 2—figure supplement 1.

There are very small, artefactual ILDs visible in Figure 2—figure supplement 1D, which are attributable to a tiny amount of capacitive/inductive channel crosstalk in the wires leading from the programmable stimulator to the implants. A current pushed through one wire will induce a tiny current in the wire running parallel to it by magnetic induction. If you look carefully at the traces in Figure 2C, you can see tiny red bumps coinciding with the large blue rising or falling phases, and vice versa, corresponding to these induced currents. The bumps appear at the rising and falling phases because magnetic induction of currents is proportional to the rate of change of field strength. The currents measured by the oscilloscope and used here for stimulus calibration are thus a superposition of the direct stimulus current injected into a given channel plus the very much smaller current induced by crosstalk from the neighboring channel. The direct current pulses and the crosstalk current pulses can be either in phase or out of phase with each other depending on the ITD, leading to either constructive or destructive interference. This accounts for the small and variable ITD-induced ILDs observed.

Why wasn’t this cue eliminated? Because doing so would be both hard and not worth the effort. Using higher quality shielded cables to reduce inductive coupling is only possible for part of the assembly if we want to keep the cable to the head connector manageably light and flexible. Reducing the rising slope of the pulses in order to reduce the amplitude of the magnetic induction would have led to pulse shapes atypical of those used in clinical practice. One could try to compute predictive inverse filters to model and subtract the magnetic induction currents at source, which would have to be done to very high accuracy to compensate for such small effects, but the considerable effort involved would yield no benefit, given that the induced ILD “cue” is not only much smaller than the ILD threshold of rats but also lacks a systematic relationship with ITD. At 100 μs ITD, where our rats routinely achieve 80% correct or better (compare Figure 2), the ILD is as low as (-)0.018 dB and does not change sign with the ITD. The largest absolute ITD-induced ILD is 0.18 dB, or equivalently 2.17%. This is roughly an order of magnitude smaller than the rats’ ILD threshold. To illustrate this, the new Figure 2—figure supplement 1 contains a new panel E which shows behavioral ILD psychometric curves obtained from two additional ND-CI rats currently being tested for follow-on studies. Note that the ITD-induced ILDs shown in panel D also lack the systematic, monotonic relationship with ITD that would be necessary to explain away the remarkable ITD sensitivity observed in our ND-CI animals. We hope the reviewer can agree that the possibility that the animals could have used these tiny, sub-threshold ILDs can be firmly excluded.
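The crosstalk mechanism described above can be illustrated with a toy simulation. All parameters here are hypothetical and chosen purely for illustration, not taken from our hardware: the pulse is simplified to a rectangular biphasic shape with 50 μs phases, and `k` is an assumed coupling factor. The sketch superimposes a small induced current, proportional to the rate of change of the neighboring channel's pulse, on the direct pulse, and shows that the resulting measured level shifts by a small fraction of a decibel whose sign depends on the sign of the ITD:

```python
import numpy as np

N = 300  # samples; one sample per microsecond

def biphasic(onset_us):
    """Idealized rectangular biphasic current pulse: +1 for 50 us, then -1 for 50 us."""
    p = np.zeros(N)
    o = int(onset_us)
    p[o:o + 50] = 1.0
    p[o + 50:o + 100] = -1.0
    return p

def measured_level_db(itd_us, k=0.02):
    """RMS level (dB) of one channel's measured current: the direct pulse plus
    a small induced current proportional to the *rate of change* of the
    neighboring channel's pulse (k is a hypothetical coupling factor)."""
    direct = biphasic(100)
    neighbor = biphasic(100 + itd_us)
    induced = k * np.gradient(neighbor)  # magnetic induction ~ dI/dt
    m = direct + induced
    return 20 * np.log10(np.sqrt(np.mean(m ** 2)))

# Constructive vs destructive interference of direct and induced pulses
# yields a tiny, asymmetric, ITD-dependent level difference.
ild = measured_level_db(+10) - measured_level_db(-10)
```

In this toy model the induced spikes at the neighbor's pulse edges overlap different phases of the direct pulse for positive versus negative delays, which is why the artefactual "ILD" is asymmetric in ITD yet remains far below a rat's ILD threshold.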

3) Why were the NH rats stimulated with two tubes instead of a single loudspeaker? Wouldn't this lead to something like the precedence effect, and complicate interpretation of the NH data? The rationale should be discussed (I presume this is to control for ILD).

In contrast to the study by, e.g., Wesolek et al. (2010), who presented free-field sounds from loudspeakers, we were interested in presenting the sounds dichotically. A single loudspeaker would give very poor control over the actual ITDs, given the hard-to-quantify and hard-to-control reverberation of free-field sound in the test chamber. Dichotic stimulation not only gives better stimulus control, but also more closely reflects the situation in CI stimulation, where each ear is stimulated essentially independently. The ear tubes we devised for our near-field acoustic setup (see Figure 2—figure supplement 2A) achieve this by operating like a pair of “open” stereo headphones, such as those commonly used in previous studies of binaural hearing in humans and animals (e.g. Keating et al., 2013). We validated this near-field, open-headphone stimulation method with a 3D-printed “rat KEMAR” dummy head, as shown in Figure 2—figure supplement 2D and described in further detail in Li et al. (2019). This enabled us to verify that the acoustic ITDs at the rat’s ears correspond faithfully to the ITDs imposed on the electrical signals sent to the headphone drivers. The precedence effect does not enter into this any more than it would in any study delivering binaural stimulation over open headphones. We have edited the main text and the legend of Figure 2—figure supplement 2 to indicate that these sound tubes operate like a set of open stereo headphones. This will hopefully clarify the rationale.

Reviewer #2:

This manuscript reports behavioral and physiological sensitivity of neonatally deafened rats to interaural timing differences (ITD) in cochlear implant stimulation. The reported thresholds, around 50 microsec, are remarkably short, certainly shorter than observations in pre-lingually deaf humans and considerably shorter than most physiological results in other animals. Indeed, I was initially inclined to question the result, but the experimental design and conduct of the experiments seem solid. The over-zealous tone of the presentation detracts from the impact of the study.

We thank reviewer 2 for this advice and have attempted to dial down the tone in our revised manuscript from over-zealous to merely excited by our results.

The authors tend to overstate the poor ITD sensitivity of cochlear implant (CI) users. The Introduction, for instance, states that "ITD sensitivity of CI patients is poor or completely missing". This might be more-or-less true of pre-lingually deaf CI patients. It is not correct, however, for post-lingually deaf patients, and the authors don't make that distinction here. Figure 2 of Laback et al., 2011, for instance, shows that the median ITD threshold for post-lingual patients is between 100 and 200 microsec and that post-lingual patients with no ITD sensitivity are rare (about 9% in that figure). Thresholds of 100-200 microsec are comparable to those of normal hearing human listeners detecting interaural delays in sound envelopes. Again, in subsection “Early deaf CI rats discriminate ITD as accurately as their normally hearing litter mates”, I think that it is not correct that pre-lingual CI patients "usually" have ITD thresholds "too large to measure", although I would accept "often".

We thank reviewer 2 for this comment and have revised the two sentences in the Introduction accordingly. We are of course happy to tone down “usually” to “often”, and to flag the particular difficulties of pre-lingually deaf patients, who are in fact our main interest group. Nor do we wish to get into an argument about quite how poor the ITD sensitivity of CI users really is. It is generally accepted that human bilateral CI users struggle with ITDs, and that this difficulty tends to be particularly pronounced in early deaf patients. That provides sufficient motivation for our research.

It is important to note that ITD sensitivity in CI patients declines as pulse rates are increased above about 50 pps, and clinical processors run at >900 pps. The present study used single bilateral pulse pairs for physiology and relatively low (50 pps) pulse rates for behavior. For that reason, the present results in rats cannot be compared with human results obtained with clinical processors, only with those made with laboratory processors.

Reviewer 2 is of course quite correct that pulse rate matters (we are currently conducting follow-on studies documenting this in our animal model), and we have now incorporated this point in our revised Discussion. Indeed, there are many additional factors that would need to be taken into account if one wished to formally compare our animal results against those obtained with human patients. But here we are not trying to make a detailed, formal comparison between our animal results and any one particular human study or patient group. Instead, we hope the reviewer will concede that the behavioral performance exhibited by our ND rats is clearly surprisingly good compared with any human patient performance data published so far.

There are as many as 5 experimental groups of rats that are lumped together as neonatally deafened (ND) and normal hearing (NH). I count the following groups:

1) Neonatally deafened, trained and tested for behavior

2) Neonatally deafened, tested with inferior colliculus (IC) recording

3) Raised with normal hearing, trained and tested for behavior using sound stimulation

4) Raised with normal hearing, tested with IC recording and sound stimulation

5) Raised with normal hearing, deafened as adults (I think), implanted with CIs, and tested with IC recording – These animals are referred to as "NH rats", the same as groups 3 and 4

We thank reviewer 2 for the important hint that the experimental groups of our study were not as clearly defined and named as one might wish. We have remedied this in the revised paper and added a new figure to the revised manuscript (see new Figure 1) which provides a simple overview of the four experimental groups and the experimental pipeline each followed. Our study does not include any group of acoustically stimulated rats with IC multi-unit recording. Both groups with IC recording were bilaterally supplied with CIs immediately before the start of the IC measurements and were thus electrically stimulated during the IC recording, minimizing the number of variables. In fact, the only difference between these groups was their hearing experience: neonatally deafened (ND) versus normal hearing until CI implantation (now termed hearing experienced (HE)). Please see groups 3 and 4 in the new Figure 1.

We need to know if groups 1 and 2 are the same rats. If so, what was the history of electrical stimulation prior to IC recording? That is, did they get just two 25-30 minute sessions of stimulation a day for about 14 days, or did they receive some other electrical stimulation?

The animals undergoing electrophysiology recordings were not the same as those undergoing behavioral testing, but were their littermates. The animals used for physiology had no experience of CI stimulation prior to the recording session. Those used for behavior were only stimulated during their twice-daily training and testing sessions. The new Figure 1 and clarifications in subsection “Varying degrees and types of ITD tuning are pervasive in the neural responses in the IC of ND rats immediately after adult cochlear implantation” of the revised paper will hopefully resolve any confusion about the stimulation history of any group of animals described in the paper.

Were group 5 rats deafened, implanted, and recorded all the same day, or did time elapse between implantation and IC recording? Were those rats deafened and, if so, how?

The animals described as “Group 5” by reviewer 2 correspond to what we now call group HECI-E (see new Figure 1). These animals had normal hearing until bilateral CI implantation, and were thus hearing experienced. They were not chemically deafened, and IC recordings began effectively immediately after the implantation of the CIs and recording electrodes was concluded. The whole experiment was performed in one day. We have added the following information to our revised manuscript (subsection “Varying degrees and types of ITD tuning are pervasive in the neural responses in the IC of ND rats immediately after adult cochlear implantation”): during CI implantation, the tympanic membrane was perforated, the middle ear ossicles were removed (which immediately raises hearing thresholds by 50 dB (Illing and Michler, 2001)), the cochlea was opened above the middle turn, the CI array was inserted, and the middle ear and auditory canal were then filled with agar. This would have dramatically impaired any residual hearing. Also, physiological responses were recorded in a sound-attenuated chamber in the absence of any acoustic stimulation. We have added this information to the subsection “Multi-unit recording from IC”.

We need a different label to distinguish group 5 from groups 3 and 4. The caption of Figure 3 refers to "hearing experienced" rats and cats, which I think is a useful term. Still, the presentation of Figure 3 in the text refers only to "NH", and it is not obvious that the data in Figure 3 are, I guess, from rats that were raised with normal hearing, then deafened, implanted, and stimulated electrically as young adults.

We thank reviewer 2 for this hint and have now introduced four new abbreviations for the four experimental groups of our study (see new Figure 1) and used them throughout the revised manuscript and figures so that the different groups can be clearly distinguished. Reviewer 2 has correctly understood Figure 3: the rats were normal hearing, and therefore hearing experienced, until the time of bilateral CI implantation, which induced acute deafness. The labels of Figure 4 have been corrected as per the new Figure 1.

Reviewer #3:

This paper has significance in the fields of Developmental Biology and Neuroscience and warrants publication in eLife. The aim of the current study, using behavioral and physiological evidence, was to determine if interaural time difference (ITD) thresholds can be measured in neonatally deafened rats when implanted with cochlear implants in adulthood. In addition, the rats were provided with rigorous training with ITDs. A secondary aim of the study was to debunk the idea of an early "critical period" for learning ITDs. The study found that neonatally deafened rats were able to behaviorally respond and learn to use ITDs with exquisite precision even when no input is provided during an early "critical period", and that this was due to the presentation of consistent, salient ITDs after cochlear implantation.

As a basic science question, the work in this study provides an excellent animal model for future behavioral testing in binaural tasks. In addition, the methods and implementation are entirely sound, the work is well-motivated, and it falls within the scope of this journal. Clinically, there is a need in the field to understand the impact of bilateral implantation on the developing auditory system and its ability to provide the necessary cues for sound localization. This study provides a good stepping stone for future developmental studies that would like to probe whether better processing strategies, or early intervention with cochlear implants, is the more pressing issue for attaining ITD sensitivity. However, there were major logical flaws or gaps in the manuscript. These could easily be fixed with changes to the language in the Introduction and Discussion. These major issues are outlined below:

1) The concept of the "critical period" is not well-defined. More of the literature on the relationship between critical periods and ITD sensitivity should be surveyed in order to understand what the authors are referring to in the context of the present study. First, it would be very helpful to the reader to be able to visualize, with a schematic or timeline, how the animals in the current study were deafened, implanted, and subsequently trained with ITDs, compared with other studies.

We thank reviewer 3 for this suggestion and have added a new Figure 1 for a better overview of our four experimental groups which shows the time and type of treatment for each of the cohorts.

Second, what is the nature of this critical period? Is it a period for development of a skill, a vulnerable period, a period for recovery from deprivation? These examples are taken from Kral (2013) as examples of previously published definitions. I think it might be in the authors' benefit to clearly define what this critical period represents so that the interpretation of the results can be easily understood.

We thank the reviewer for this excellent suggestion. The critical period referred to here is indeed as defined in Kral (2013). To clarify this definition, we have inserted the following sentence in the Introduction: “In other words, if there are no binaural inputs with useful ITDs during a presumed "critical" developmental period, then the neural circuitry needed for ITD sensitivity is thought to fail to develop, leading to a perhaps permanent deficit (Kral, 2013).”

2) Throughout the Introduction and Discussion there is an implicit suggestion about how deafness at an early age may not necessarily cause a loss of ITD sensitivity, and instead that CIs have limitations which could prevent learning of ITDs. However, a distinction needs to be made on what type of limitations are being debated: is it (a) the inability of CIs to provide ITDs or is it (b) exposure to inconsistent ITDs through bilateral CIs that is preventing learning of those cues? In the manuscript, the authors appear to advocate for the latter point. This distinction is further confused by the sentence in the Introduction where the sensitivity of CI patients is mentioned, citing papers with a mix of measurements made with both clinical processors and synchronized research processors. Therefore, if the authors are suggesting that poor delivery of ITDs using unsynchronized processors is preventing learning of ITD cues in humans, then an equivalent condition with unsynchronized processors needs to be tested in the rats. This must be done in order to confidently say that synchronized ITDs through the CI need to be conveyed in order to promote learning of binaural cues.

We agree entirely with reviewer 3 and have tried to remedy the points where our manuscript is not clear. We would like to suggest that possibility (a), put forward by the reviewer, is not one of practical relevance. If a patient is fitted with CIs in both ears, then the stimuli delivered to the left and right ear will inevitably have some temporal relationship, and hence there have to be ITDs. Every form of binaural stimulation must necessarily “provide ITDs”, but whether these ITDs are delivered in a manner that is consistently informative and usable by the patient’s auditory system is of course a very different question. We are therefore ultimately exclusively interested in question (b). But fully answering that question was not the objective of this study. Here we merely aim to falsify the critical period hypothesis. Fully mapping out what quality of ITDs needs to be provided, and when, in order to achieve good ITD sensitivity through prosthetic hearing will require several follow-on studies with appropriately chosen controls and age cohorts. With our study we have laid the foundation for this important work by developing the first valid behavioral, bilateral CI animal model which clearly, at least under ideal conditions, can develop very high levels of ITD sensitivity despite severe deafness during hypothetical critical periods. In the revised manuscript we have tried to be more careful not to overstate the implications of the present set of results, and to stress the importance of future studies designed to contrast the effect of different binaural stimulation regimes on eventual ITD sensitivity outcomes.

3) A final consideration, the authors should clearly state the hypotheses of the study. The authors have designed a unique study with two experiments, one behavioral and one physiological, whereby response lateralization and the proportion of ITD sensitive neurons were used as a metric to understand whether ND rats were able to (a) utilize ITDs and (b) possessed the underlying neural structures which reflect the behavioral data. It is imperative for the authors to state a set of hypotheses in relation to these two experiments and subsequently describe to the reader how they were carried out and measured. This means that, somewhere in the Introduction, there should be a justification for measuring response lateralization and proportion of ITD sensitive neurons/SNR for the behavioral and physiological experiments, respectively.

We thank reviewer 3 for this valuable suggestion. We have thoroughly revised the Introduction to make it clear that the principal aim of our study is simply to test the critical period hypothesis through our behavioral work. In the event, our results falsify that hypothesis. The electrophysiology data are presented as “corroborating and illustrative” data, showing that midbrains of ND rats have a high amount of ITD sensitivity, which is hardly surprising given how well these animals can learn to lateralize ITDs. But the electrophysiology was not set up as a formal hypothesis test.

To incorporate reviewer 3’s suggestion, we reworked the Introduction, and the penultimate paragraph of the Introduction now reads:

"In summary, the critical period hypothesis is widely considered a likely explanation for the poor ITD sensitivity reported in binaural CI patients, but it has not been subjected to attempts of direct experimental falsification. Therefore, our objective here was to examine experimentally the contrary hypothesis, namely that very good ITD sensitivity can be induced even after periods of severe hearing loss throughout infancy, provided that the stimulation used in early rehabilitation is optimized for ITD encoding. Given the great difficulties described in examining this hypothesis in a clinical setting, we sought to test this hypothesis by developing an animal model which is amenable to psychoacoustic behavior testing."

Introduction:

Needs a thorough literature review on the auditory critical period in relation to ITD sensitivity. The authors should also discuss why there hasn't been a rigorous study to understand critical periods and ITD sensitivity.

We have tried hard to accommodate the reviewer’s request, and extended our discussion of previous work on ITD critical periods both in the Introduction and in the Discussion section. We have also tried to highlight some of the difficulties that would be involved in studying critical periods in a clinical setting, but we aren’t sure whether that meets the reviewer’s demand that we “discuss why there hasn't been a rigorous study to understand critical periods and ITD sensitivity”. Ultimately, the reasons why the research community hasn’t yet mapped out ITD critical periods are not knowable, but we have tried to say something relevant to this topic.

On the topic of a "critical period for the development of ITD sensitivity", Seidl and Grothe (2002) showed in gerbils that this sensitivity matures only after the onset of hearing, and that this development can be dramatically disturbed by altered acoustic input, such as omnidirectional white noise between P10 and P25, resulting in ITD tuning that resembles an immature, juvenile system. If normal hearing animals were exposed to noise as adults, no changes were observed. The authors concluded that the development of ITD tuning requires normal acoustic experience in the early stages of development, possibly to develop the inhibitory inputs of the ITD system. However, that study leaves open whether an early deafened, and thus inexperienced, system can still mature through sensory activation at a later stage, including the maturation of inhibitory inputs. Important evidence for such adaptability of the adult auditory system was found in one of our earlier studies on neonatally deafened rats that were supplied with CIs as young adults (Rosskothen-Kuhl et al., 2018). There, we showed that the initial activation of the auditory system by electrical stimulation significantly upregulates inhibitory markers such as GAD65 and GAD67 in the auditory midbrain; thus, important input-induced maturation processes may take place even in an adult brain and may allow the development or fine tuning of ITD sensitivity.

Need to make distinction between early-deafened vs. late-deafened. This will help to understand the relation between human and rats.

This also relates to criticisms made by reviewer 2. In the thoroughly revised Introduction we have greatly expanded the description of the differences between these two patient groups.

Needs more information on training and plasticity in order to build up the question. While prior studies have shown that early deafness does impact ITD sensitivity, they do not suggest that sensitivity could not be relearned. Please describe evidence (if any) that supports the notion that lack of exposure to binaural cues in adulthood, rather than a total loss of binaural function early in life, limits sensitivity to binaural cues for early-deafened populations.

As stated earlier, a distinction should be made between synchronized delivery of binaural cues and unsynchronized bilateral implantation and which one of these is the contributing factor to improper learning or acquisition of ITDs in adulthood.

The extent to which it may be possible to relearn ITD sensitivity has not been explored in great detail, but we do know that simply accumulating more and more bilateral CI hearing experience with conventional processors does not lead to high levels of ITD sensitivity, as shown by studies such as Gordon et al. (2014). These points have been incorporated into the extended Discussion.

Subsection “Varying degrees and types of ITD tuning are pervasive in the neural responses in the IC of ND rats immediately after adult cochlear implantation”: The interpretation of various shapes of tuning curves isn't always known to the average reader. It would be helpful, either here or in the data analysis section, to describe what each of the curve types indicates about the neurons being recorded from.

Interpreting tuning curve shapes is perhaps more an art than a science. Comparing it to reading tea leaves would perhaps be unfair, as there have been rigorous attempts to work out, for example, what tuning curve shapes one might expect under various constraints and assumptions about optimal neural coding (e.g. Harper and McAlpine, 2004), but the issues are complex and not all that relevant to the questions examined in this study. We merely wanted to point out that the tuning curves we observed are somewhat similar to those described by previous authors working on similar preparations (e.g. Smith and Delgutte, 2007). Please see subsection “Varying degrees and types of ITD tuning are pervasive in the neural responses in the IC of ND rats immediately after adult cochlear implantation”. We have tried to be clearer in our description of the tuning curve shapes and have reworked the subsection “Data analysis”, and hope that this meets the reviewer’s requirements.

Discussion: Please expand on the importance of ITD processing in rats and what "generally poor" means in the context of the current study.

We have expanded on the importance of ITD processing for rats and have clarified this sentence (Discussion).

Discussion: The logic here is a little hard to follow. The authors state that only 48% of IC neurons were found to be ITD sensitive in ND subjects; conversely, in the current study, 91% of IC neurons were ITD sensitive. The claim appears to be that because the proportion of ITD sensitive neurons was greater in the present study, this is a hallmark of relearning of ITDs in the animal. However, nowhere in the introduction or the hypotheses of the paper was this addressed. The hypotheses of the paper need to be clearly stated such that the reader can understand what measures from the (a) behavioral and (b) physiological data constitute "good sensitivity to ITDs".

We thank reviewer 3 for pointing out this misunderstanding. As shown for group 3 in our new Figure 1, the IC recordings were made in neonatally deafened rats bilaterally implanted with CIs as young adults. These rats had no previous experience with electrical CI stimulation before the IC recording. The 91% of ITD sensitive IC units were thus obtained without any “relearning of ITDs” in these animals, and our intention was to point out that, under the optimal stimulation conditions (microsecond-precise stimulation) used in this study, innate ITD sensitivity exists even in a naive auditory system.

The objective of this study has been added at the end of the Introduction. Additionally, we have clarified these sentences to improve the logical flow (Discussion).

Discussion: This sentence, "It is important to remember that…congenital cats" is very long and a little difficult to follow. I think it would be helpful if the authors previously described what characterized good vs. poor tuning. It sounds like the authors are trying to make a connection between the behavioral data and the neural data, however, this is unclear.

We have broken down this sentence and rewritten the paragraph to make the argument clearer (Discussion).

Discussion: There is a reference to "critical periods" in each of these animal models, however, the statements made are rather vague. In each the ferret, mouse, and gerbil models, there is a reference to certain synapses and their relationship to binaural circuity prior to the onset of hearing (i.e. the "critical period"). However, the authors do not explain what happens to the circuitry, and what is required to occur (either before or after the so-called "critical period") in order to overcome any deficits in ITD processing. In addition, the authors have not explicitly stated what makes the rat model in present study unique for understanding the implications of the hypothesized "critical period."

We thank the reviewer for pointing this out, and we have revised the Discussion and added the necessary clarifications.

Our animal model is unique because it allows, for the first time, ITD sensitivity to be tested behaviorally with a high degree of accuracy and sensitivity in an animal model in which hearing experience as well as stimulation, training, and rehabilitation parameters can be freely manipulated and controlled. This has enabled us to show, in a first instance, that despite the lack of sensory input during a presumed critical period, the neonatally deaf auditory system can develop the ability to lateralize ITDs with high accuracy. Follow-on studies will enable us and others to map out in detail the conditions that determine eventual binaural auditory performance outcomes.

Discussion: Authors should expand on why inhibition is important.

We have added additional information from Pecka et al. (2008) showing the importance of inhibition for ITD processing (Discussion).

[Editors' note: further revisions were suggested prior to acceptance, as described below.]

Summary:

The reviewers agree that the authors have made substantial improvements, especially with toning back the maladaptive plasticity hypothesis. However, there are two major remaining issues.

First, the writing of the Introduction still needs significant work, with more specific and crisper language, especially when referring to literature or ideas, such as when discussing the contribution of the critical period and related hypotheses as well as the role of training, as noted by reviewer 3. The authors also need to clarify interpretation of the rats' ability to use ITDs with exquisite precision as due to (1) provision of salient ITDs (independent of missed critical period) or (2) rigorous training and synchronized stimulation allowed rats to learn to use ITDs even though the critical period was missed.

We thank reviewer 3 for this feedback. With regard to the last point mentioned above, we would like to point out that the answer depends, to an extent, on how one defines a critical period. When we wrote the paper, we had in mind the example of scholars of binocular vision, who, as a general rule, think of critical periods as "strong", in the sense that almost no amount of training after closure of the critical period can fully compensate for the deficits incurred by the lack of appropriate input during that period. Consider that essentially no adult amblyopic patients ever achieve normal stereopsis or normal visual acuity in the affected eye, irrespective of the quality of the visual stimulation and rehabilitative training they receive after the critical period has closed. If the situation for CI ITD sensitivity were the same as in amblyopia, then no amount of training could restore near-normal ITD sensitivity in adulthood. The fact that our adult-implanted CI rats achieved ITD sensitivity no worse than that of normally hearing littermates can therefore only mean that there is no "strong" critical period for the development of ITD sensitivity in young rats. Of course, high-quality stimulation and training might also play a role in the case of a "weak" critical period, but firstly, the small amount of training needed by our behavioral ND rat cohort indicates that any critical period, if it exists at all, must be weak indeed, and secondly, a "weak" critical period could only ever have weak explanatory power when trying to understand the causes of poor binaural outcomes for current early deaf CI patients. We therefore consider the weak critical period case scientifically and clinically uninteresting. We have substantially rewritten the Introduction to make the analogy to amblyopia explicit and to emphasize that the aim of our study is to test the "strong critical period hypothesis". This resolves the potential ambiguity in how our results should be interpreted, which the reviewer had pointed out here.

There is also a new concern, raised by reviewer 2, about the HE-CI group, which the revision indicates were not chemically deafened before implantation. Specifically, there is a potential for electrophonic responses (see, e.g., recent work by Kral's group: Sato et al., 2016). The acoustic frequency corresponding to the stimulation parameters used in this paper would be approximately the reciprocal of the pulse period, in this case 1/0.000164 s or ~6000 Hz. It is not clear which tonotopic areas were recorded in the study and how electrophonic responses would affect the result. Even though the HE-CI group was given a conductive loss, this would not attenuate the signal as completely as chemical deafening, and inner/outer hair cells are likely to remain intact, which may permit indirect electrophonic stimulation of inner hair cells via the outer hair cells, as well as direct electrical stimulation of inner hair cells. After discussion, all of the reviewers agreed that this is a substantial experimental confound, and also that this group does not add value to the study. Therefore, we recommend removing this group.
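For readers who wish to check the reviewer's arithmetic, the period-to-frequency conversion behind the ~6000 Hz estimate can be sketched as follows (a minimal illustration in Python; the helper name is ours and not part of the study's code):

```python
def period_to_frequency_hz(period_s: float) -> float:
    """Return the frequency (Hz) corresponding to a pulse period given in seconds."""
    return 1.0 / period_s

# 164 microsecond pulse period, as quoted in the review comment
freq_hz = period_to_frequency_hz(164e-6)
print(round(freq_hz))  # prints 6098, i.e. roughly the ~6000 Hz the reviewer cites
```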

We have followed the suggestion and removed data from the HE-CI group from this study and adapted Figure 1, Figure 3, and Figure 4 accordingly.

https://doi.org/10.7554/eLife.59300.sa2

Article and author information

Author details

  1. Nicole Rosskothen-Kuhl

    1. Department of Biomedical Sciences, City University of Hong Kong, Hong Kong, China
    2. Neurobiological Research Laboratory, Section for Clinical and Experimental Otology, University Medical Center Freiburg, Freiburg, Germany
    Contribution
    Conceptualization, Resources, Data curation, Software, Formal analysis, Supervision, Funding acquisition, Validation, Investigation, Visualization, Methodology, Writing - original draft, Project administration, Writing - review and editing
    Contributed equally with
    Alexa N Buck
    For correspondence
    nicole.rosskothen-kuhl@uniklinik-freiburg.de
    Competing interests
    No competing interests declared
    ORCID iD: 0000-0003-4724-5550
  2. Alexa N Buck

    Department of Biomedical Sciences, City University of Hong Kong, Hong Kong, China
    Contribution
    Data curation, Software, Formal analysis, Validation, Investigation, Visualization, Methodology, Writing - original draft, Writing - review and editing
    Contributed equally with
    Nicole Rosskothen-Kuhl
    Competing interests
    No competing interests declared
    ORCID iD: 0000-0003-0124-9716
  3. Kongyan Li

    Department of Biomedical Sciences, City University of Hong Kong, Hong Kong, China
    Contribution
    Methodology
    Competing interests
    No competing interests declared
  4. Jan WH Schnupp

    1. Department of Biomedical Sciences, City University of Hong Kong, Hong Kong, China
    2. CityU Shenzhen Research Institute, Shenzhen, China
    Contribution
    Conceptualization, Resources, Data curation, Software, Formal analysis, Supervision, Funding acquisition, Validation, Investigation, Visualization, Methodology, Writing - original draft, Project administration, Writing - review and editing
    For correspondence
    jan.schnupp@googlemail.com
    Competing interests
    No competing interests declared

Funding

Hong Kong Government General Research Fund (GRF) (11100219)

  • Jan WH Schnupp

Friends Association "Taube Kinder lernen hören e.V."

  • Nicole Rosskothen-Kuhl

Hong Kong Health and Medical Research Fund (HMRF) (06172296)

  • Jan WH Schnupp

Shenzhen Science and Innovation Fund (JCYJ20180307124024360)

  • Jan WH Schnupp

German Academic Exchange Service (605728 (PRIME – Postdoctoral Researchers International Mobility Experience))

  • Nicole Rosskothen-Kuhl

Deutsche Forschungsgemeinschaft (grant number EXC1086, Cluster of Excellence BrainLinks-BrainTools)

  • Nicole Rosskothen-Kuhl

Ministerium für Wissenschaft, Forschung und Kunst Baden-Württemberg

  • Nicole Rosskothen-Kuhl

Universität Freiburg (Funding programme for Open Access Publishing)

  • Nicole Rosskothen-Kuhl

The funders had no role in study design, data collection and interpretation, or the decision to submit the work for publication.

Acknowledgements

We thank A Hyun Jung for assisting with the behavioral training of CI rats, and P Ruther and the Cluster of Excellence BrainLinks-BrainTools (German Research Foundation, grant number EXC1086) for support with recording electrodes. Work leading to this publication was supported by grants from the Hong Kong General Research Fund (11100219) and Health and Medical Research Fund (06172296), the Shenzhen Science and Innovation Fund (JCYJ20180307124024360), the German Academic Exchange Service (DAAD) with funds from the German Federal Ministry of Education and Research (BMBF) and the People Programme (Marie Curie Actions) of the European Union's Seventh Framework Programme (FP7/2007-2013) under REA grant agreement no. 605728 (PRIME – Postdoctoral Researchers International Mobility Experience), and the friends' association 'Taube Kinder lernen hören e.V.'. The article processing charge was funded by the Baden-Wuerttemberg Ministry of Science, Research and Art and the University of Freiburg in the funding programme Open Access Publishing.

Ethics

Animal experimentation: All procedures involving experimental animals reported here were approved by the Department of Health of Hong Kong (#16-52 DH/HA&P/8/2/5) or Regierungspräsidium Freiburg (#35-9185.81/G-17/124), as well as by the appropriate local ethical review committee. All surgery was performed under ketamine and xylazine anesthesia, and every effort was made to minimize suffering.

Senior Editor

  1. Barbara G Shinn-Cunningham, Carnegie Mellon University, United States

Reviewing Editor

  1. Lina Reiss, Oregon Health and Science University, United States

Reviewer

  1. Lina Reiss, Oregon Health and Science University, United States

Publication history

  1. Received: June 16, 2020
  2. Accepted: January 7, 2021
  3. Accepted Manuscript published: January 11, 2021 (version 1)
  4. Version of Record published: January 18, 2021 (version 2)

Copyright

© 2021, Rosskothen-Kuhl et al.

This article is distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use and redistribution provided that the original author and source are credited.


