Population rate-coding predicts correctly that human sound localization depends on sound intensity

  1. Antje Ihlefeld (corresponding author)
  2. Nima Alamatsaz
  3. Robert M Shapley
  1. New Jersey Institute of Technology, United States
  2. Rutgers University, United States
  3. New York University, United States
3 figures, 2 tables and 1 additional file

Figures

Figure 1. Modeling results.

(A) Firing rate of a simulated nucleus laminaris neuron with a preferred ITD of 375 µs, as a function of source ITD. The model predicts source laterality based on the locus of the peak of the firing rate function. (B) Hemispheric differences in firing rates, averaged across all 81 simulated inferior colliculus units. Rate models assume that source laterality is proportional to firing rate, causing ambiguities at the lowest sound intensities. Inset: Reconstructed responses of an inferior colliculus unit. The unit responds predominantly to sound from the contralateral side (high-contrast traces). The hemispheric difference model subtracts this activity from the average rate on the ipsilateral side (example shown with low-contrast traces). (C) Mean population response using labelled-line coding across a range of ITDs and sound intensities. Inset: The root-mean-square (RMS) difference relative to the estimated angle at 80 dB SPL does not change with sound intensity, predicting that sound laterality is intensity invariant. (D) Mean population response using hemispheric-difference coding. For lower sound intensities, predicted source direction is biased towards midline (compare red and orange versus blue or yellow). For higher sound intensities, predicted source direction is intensity invariant (blue on top of yellow line). Inset: RMS difference relative to the estimated angle at 80 dB SPL decreases with increasing sound intensity, predicting that sound laterality is not intensity invariant. Ribbons show one standard error of the mean across 100 simulated responses. Sound intensity is denoted by color (see color key in the figure).

https://doi.org/10.7554/eLife.47027.003
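Figure 1 contrasts two read-outs of the same simulated population: a labelled-line code that reports the preferred ITD of the most active unit, and a hemispheric-difference code that maps the left-right difference in average rate onto laterality. The sketch below is illustrative only and is not the authors' simulation; it assumes hypothetical Gaussian ITD tuning, an 81-unit grid of preferred ITDs, Poisson spiking, and a simple multiplicative intensity gain, to show why a peak read-out stays roughly intensity invariant while an uncalibrated hemispheric difference shrinks toward zero at low intensities.

```python
# Minimal sketch (not the authors' model) contrasting the two read-outs in Figure 1.
# All tuning parameters and the intensity gain are hypothetical assumptions.
import numpy as np

rng = np.random.default_rng(0)
best_itds = np.linspace(-750, 750, 81)      # preferred ITDs of simulated units (µs), assumed grid

def population_rates(source_itd, gain):
    """Hypothetical firing rates: Gaussian ITD tuning scaled by an intensity-dependent gain."""
    tuning = np.exp(-0.5 * ((source_itd - best_itds) / 200.0) ** 2)
    return rng.poisson(gain * 20.0 * tuning + 1.0)

def labelled_line_decode(rates):
    """Labelled-line read-out: laterality given by the preferred ITD of the peak unit."""
    return best_itds[np.argmax(rates)]

def hemispheric_difference_decode(rates):
    """Hemispheric-difference read-out: difference between the mean rates of the
    right- and left-preferring half-populations (arbitrary units, uncalibrated)."""
    return rates[best_itds > 0].mean() - rates[best_itds < 0].mean()

for gain in (0.2, 1.0):                     # stand-ins for low vs. high sound intensity
    r = population_rates(source_itd=375.0, gain=gain)
    # The peak estimate stays near 375 µs at both gains, whereas the raw hemispheric
    # difference scales with gain, i.e. it reads closer to midline at low intensity.
    print(gain, labelled_line_decode(r), hemispheric_difference_decode(r))
```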
Figure 2. Behavioral results.

(A) Stimuli: spectrally flat noise, used in experiment 1 (dark grey), versus A-weighted noise, tested as a control for audibility in experiment 2 (light grey). The purple line shows the magnitude of the zero-phase inverse A-weighting filter. (B) Responses from one representative listener (TCW) across two sound intensities, with the corresponding NLME fits for these data. (C and D) Perceived laterality as a function of ITD for (C) spectrally flat noise (experiment 1) or (D) A-weighted noise (experiment 2). Error bars, where large enough to be visible, show one standard error of the mean across listeners. Colors denote sound intensity. Insets show magnified sections of the plots. Circles show raw data; lines and ribbons show NLME fits and one standard error of the mean.

https://doi.org/10.7554/eLife.47027.004
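A zero-phase inverse A-weighting filter, as shown in Figure 2A, can be sketched by reshaping the magnitude spectrum of flat noise with the reciprocal of the standard IEC 61672 A-weighting response. The sampling rate, duration, and RMS normalization below are assumptions chosen for illustration, not parameters taken from the experiment.

```python
# Illustrative sketch only: generating inverse-A-weighted noise with a zero-phase filter.
import numpy as np

fs = 44100                                   # assumed sampling rate (Hz)
n = fs                                       # 1 s of spectrally flat (white) noise
rng = np.random.default_rng(1)
flat_noise = rng.standard_normal(n)

def a_weight_magnitude(f):
    """Standard A-weighting magnitude response (IEC 61672), linear scale."""
    f = np.asarray(f, dtype=float)
    num = (12194.0 ** 2) * f ** 4
    den = ((f ** 2 + 20.6 ** 2)
           * np.sqrt((f ** 2 + 107.7 ** 2) * (f ** 2 + 737.9 ** 2))
           * (f ** 2 + 12194.0 ** 2))
    return num / den

freqs = np.fft.rfftfreq(n, d=1.0 / fs)
ra = a_weight_magnitude(freqs)
ra[0] = ra[1]                                # avoid division by zero at DC

# Multiplying the spectrum by a purely real gain (1/RA) leaves the phase untouched,
# so the filter is zero-phase; only the magnitude spectrum is reshaped.
spectrum = np.fft.rfft(flat_noise)
inverse_a_noise = np.fft.irfft(spectrum / ra, n=n)
inverse_a_noise *= np.std(flat_noise) / np.std(inverse_a_noise)   # match overall RMS
```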
Figure 3. Conceptual model of canonical computation of location.

(A) Computing sound direction requires analysis of the binaural difference between the signals reaching the left and right ears. (B) Estimating visual depth hinges on analysis of the binocular disparity between the signals reaching the left and right eyes. (C) For both hearing and vision, the proportion of the neural population that is stimulated (in the inferior colliculus or V3) depends both on the physical dimension to be estimated (source laterality or source distance) and on the intensity of the stimulus (sound intensity or visual contrast). For hearing and vision, ambiguity in this putative neural code predicts (D) biased responses at low stimulus intensities (sound intensity or contrast).

https://doi.org/10.7554/eLife.47027.007
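As a generic illustration of the binaural computation sketched in Figure 3A (and not a description of the authors' model), the interaural time difference can be estimated from the lag of the peak of the cross-correlation between the left- and right-ear signals. The sampling rate, stimulus, and lag range below are assumptions for the example.

```python
# Generic cross-correlation ITD estimate; parameters are illustrative assumptions.
import numpy as np

fs = 44100                                   # assumed sampling rate (Hz)
true_itd_us = 375.0
shift = int(round(true_itd_us * 1e-6 * fs))  # ITD expressed in whole samples

rng = np.random.default_rng(2)
source = rng.standard_normal(fs // 10)       # 100 ms of noise as a stand-in source
left = source
right = np.roll(source, shift)               # right-ear signal lags the left by the ITD

# Cross-correlate over physiologically plausible lags (about +/-800 µs for humans).
max_lag = int(round(800e-6 * fs))
lags = np.arange(-max_lag, max_lag + 1)
xcorr = [np.dot(left, np.roll(right, -lag)) for lag in lags]
estimated_itd_us = lags[int(np.argmax(xcorr))] / fs * 1e6
print(estimated_itd_us)                      # within one sample of the true 375 µs
```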

Tables

Table 1
Results of Nonlinear Mixed Effects Model for flat-spectrum noise condition.

Note that Laterality: sound intensity refers to the NLME weight attributed to the acoustic intensity of the auditory target, whereas Laterality: audibility captures the NLME weight attributed to the listeners' pure-tone audiometric thresholds, reflecting their perceptual abilities (see Materials and methods for details).

https://doi.org/10.7554/eLife.47027.005
Description | NLME weight | Value | Std. error | t-value | p-value
Intercept: ITD | αx0 | 0.06 | 0.04 | 1.58 | 0.11
Slope: ITD | αx1 | 2.45 | 0.05 | 46.15 | <0.001***
Slope: sound intensity | αx2 | 0.02 | 0.01 | 1.47 | 0.14
Laterality: sound intensity | αy1 | 0.05 | 0.01 | 7.59 | <0.001***
Laterality: audibility | αy2 | 0.01 | 0.002 | 4.86 | <0.001***
  1. 10986 degrees of freedom.
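The table's columns are related by the usual Wald construction: the t-value is the estimate divided by its standard error, and the two-sided p-value follows from a t distribution with the residual degrees of freedom given in the footnote. The snippet below is a consistency check on the printed Table 1 values, not the authors' analysis code; small discrepancies from the printed t-values arise because the displayed estimates and standard errors are rounded.

```python
# Relating Value, Std. error, t-value, and p-value via standard Wald statistics.
from scipy.stats import t

df = 10986                          # residual degrees of freedom from the table footnote
table1 = {                          # (Value, Std. error) as printed in Table 1
    "Intercept: ITD (αx0)":              (0.06, 0.04),
    "Slope: ITD (αx1)":                  (2.45, 0.05),
    "Slope: sound intensity (αx2)":      (0.02, 0.01),
    "Laterality: sound intensity (αy1)": (0.05, 0.01),
    "Laterality: audibility (αy2)":      (0.01, 0.002),
}

for name, (value, se) in table1.items():
    t_val = value / se                       # Wald t statistic
    p_val = 2 * t.sf(abs(t_val), df)         # two-sided p-value
    print(f"{name}: t = {t_val:.2f}, p = {p_val:.3g}")
```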

Table 2
Results of Nonlinear Mixed Effects Model for inverse A-weighted noise condition.
https://doi.org/10.7554/eLife.47027.006
Description | NLME weight | Value | Std. error | t-value | p-value
Intercept: ITD | αx0 | −0.60 | 0.03 | −19.28 | <0.001***
Slope: ITD | αx1 | 2.57 | 0.06 | 46.26 | <0.001***
Slope: sound intensity | αx2 | 0.06 | 0.01 | 4.98 | <0.001***
Laterality: sound intensity | αy1 | 0.04 | 0.01 | 7.10 | <0.001***
Laterality: audibility | αy2 | 0.01 | 0.002 | 3.30 | <0.001***
  1. 10986 degrees of freedom.

Additional files

Cite this article:

Antje Ihlefeld, Nima Alamatsaz, Robert M Shapley (2019) Population rate-coding predicts correctly that human sound localization depends on sound intensity. eLife 8:e47027. https://doi.org/10.7554/eLife.47027