Neuroscience

Dynamic representation of 3D auditory space in the midbrain of the free-flying echolocating bat

Ninad B Kothari, Melville J Wohlgemuth, Cynthia F Moss (corresponding author)
Johns Hopkins University, United States

Research Article
Cite this article as: eLife 2018;7:e29053 doi: 10.7554/eLife.29053

Abstract

Essential to spatial orientation in the natural environment is a dynamic representation of direction and distance to objects. Despite the importance of 3D spatial localization to parse objects in the environment and to guide movement, most neurophysiological investigations of sensory mapping have been limited to studies of restrained subjects, tested with 2D, artificial stimuli. Here, we show for the first time that sensory neurons in the midbrain superior colliculus (SC) of the free-flying echolocating bat encode 3D egocentric space, and that the bat’s inspection of objects in the physical environment sharpens tuning of single neurons, and shifts peak responses to represent closer distances. These findings emerged from wireless neural recordings in free-flying bats, in combination with an echo model that computes the animal’s instantaneous stimulus space. Our research reveals dynamic 3D space coding in a freely moving mammal engaged in a real-world navigation task.

https://doi.org/10.7554/eLife.29053.001

eLife digest

Humans and other animals can navigate their natural environments seamlessly, even if there are obstacles in their path. However, it is not well understood how an animal’s brain processes information from the senses to map where it is in relation to these objects, both in terms of distance and direction.

Bats can help answer these questions because they use a biological navigation system: echolocation. Bats produce high-pitched squeaks and then listen to the echoes that return when the sound bounces off nearby objects. A bat can then use this information to estimate both the direction of an object and how far away it is. Bats can also change their echolocation signals to focus on different objects, and researchers can record and analyze these signals to directly measure what the bat is paying attention to.

Kothari, Wohlgemuth and Moss have now investigated how the brain cells of bats process the animals’ movements while flying in three-dimensional space. A wireless probe was inserted into the midbrain region of each bat to detect whenever there was an electrical impulse in the nearby brain cells. The bats were then allowed to fly freely in a large room that contained obstacles, while each bat’s echolocation signals and brain activity were recorded.

The experiments revealed a group of brain cells that codes for the position of an object in three-dimensional space. Kothari, Wohlgemuth and Moss noted that these brain cells register the distance to objects more precisely when the bat changes its echolocation behavior to focus on those objects. Moreover, the activity of these cells also shifted to represent closer distances when the bat inspected an object. These findings are not only relevant to echolocating bats, but rather reflect the general role that shifts in attention may play when many species map the locations of objects around them.

Further similar studies with other species would contribute to a more complete understanding of how animals’ nervous systems work under natural conditions. In the future, these findings, and the studies that build upon them, could be applied to other fields of research like medicine or engineering. For example, smart wireless devices, designed to record and transmit physiological measurements based on movement, could efficiently monitor human health, and robots equipped with adaptive sonar could navigate effectively in complex environments.

https://doi.org/10.7554/eLife.29053.002

Introduction

As humans and other animals move in a 3D world, they rely on dynamic sensory information to guide their actions, seek food, track targets and steer around obstacles. Such natural behaviors invoke feedback between sensory space representation, attention and action-selection (Lewicki et al., 2014). Current knowledge of the brain’s representation of sensory space comes largely from research on neural activity in restrained animals, generally studied with 2D stimuli (Van Horn et al., 2013); however, far less is known about 3D sensory representation, particularly in freely moving animals that must process changing stimulus information to localize objects and guide motor decisions as they navigate the physical world.

Animals that rely on active sensing provide a powerful system to investigate the neural underpinnings of sensory-guided behaviors, as they produce the very signals that inform motor actions. Echolocating bats, for example, transmit sonar signals and process auditory information carried by returning echoes to guide behavioral decisions for spatial orientation (Griffin, 1958). Work over the past decade has revealed that echolocating bats produce clusters of sonar calls, termed sonar sound groups (SSGs), to closely inspect objects in their surroundings or to negotiate complex environments (Kothari et al., 2014; Moss et al., 2006; Petrites et al., 2009; Sändig et al., 2014). We hypothesize that the bat’s sonar inspection behavior sharpens spatio-temporal echo information processed by the auditory system in a manner analogous to the active control of eye movements to increase visual resolution through sequences of foveal fixations (Hayhoe and Ballard, 2005; Moss and Surlykke, 2010; Tatler et al., 2011). Importantly, the bat’s acoustic behaviors provide a quantitative metric of spatial gaze, and can thus be analyzed together with neural recordings to investigate the dynamic representation of sensory space.

Echolocating bats compute the direction of echo sources using a standard mammalian auditory system (Wohlgemuth et al., 2016). The dimension of target distance is computed from the time delay between sonar emissions and echoes (Simmons, 1973). Neurophysiological investigations of echo processing in bats reveal that a class of neurons shows facilitated and delay-tuned responses to simulated pulse-echo pairs. It has been hypothesized that echo delay-tuned neurons carry information about the distance to objects (Feng et al., 1978; O'Neill and Suga, 1982; Suga and O'Neill, 1979; Valentine and Moss, 1997); however, the neural representation of target distance in bats listening to self-generated echoes reflected from physical objects has never previously been empirically established.

The midbrain superior colliculus (SC) has been implicated in sensory-guided spatial orienting behaviors, such as visual and auditory gaze control in primates, cats and barn owls (Knudsen, 1982; Krauzlis, 2004; du Lac and Knudsen, 1990; Middlebrooks and Knudsen, 1984; Munoz et al., 1991; Sparks, 1986; Stein et al., 1989), prey-capture behavior in frog and pit viper (Grobstein, 1988; Hartline et al., 1978; Newman and Hartline, 1981), and echolocation in bats (Valentine and Moss, 1997; Valentine et al., 2002). Previous work has also demonstrated that the SC is an integral part of the egocentric spatial attention network, specifically for target selection and goal-directed action (Krauzlis et al., 2013; Lovejoy and Krauzlis, 2010; McPeek and Keller, 2004; Mysore and Knudsen, 2011; Zénon and Krauzlis, 2012). Work in freely behaving rodents has also demonstrated a more general role of the SC in sensory-guided orienting behaviors (Duan et al., 2015; Felsen and Mainen, 2008). Additionally, measures of the local field potential (LFP) in the midbrain optic tectum (avian homologue of the SC) have shown that increases in the gamma band (~40–140 Hz) correlate with attention to sensory stimuli (Sridharan and Knudsen, 2015). The research reported here is the first to investigate the behavioral modulation of depth-tuned single unit responses and gamma band oscillations in the SC of a mammal inspecting objects in its physical environment.

Prior work on sensorimotor representation in the mammalian SC has been largely carried out in restrained animals performing 2D tasks, leaving gaps in our knowledge about the influence of action and attention on sensory responses in animals moving freely in a 3D physical environment. To bridge this gap, we conducted wireless chronic neural recordings of both single unit activity and LFPs in the SC of free-flying bats that used echolocation to localize and inspect obstacles along their flight path. Central to this research, we developed a novel echo model to reconstruct the bat’s instantaneous egocentric stimulus space, which we then used to analyze echo-evoked neural activity patterns. Our data provide the first demonstration that neurons in the midbrain SC of a freely moving animal represent the 3D egocentric location of physical objects in the environment, and that active sonar inspection sharpens and shifts the depth tuning of 3D neurons.

Results

Big brown bats, Eptesicus fuscus, flew in a large experimental test room and navigated around obstacles (Figure 1A, wall landing; Figure 1B, platform landing); they received a food item after each landing. The bats showed natural adjustments in flight and sonar behaviors in response to echoes arriving at their ears from objects in the room. The positions of objects were varied across recording sessions, and the bats were released from different points in the room within recording sessions, to limit their use of spatial memory for navigation and instead invoke their use of echo feedback. We specifically tested whether the bats relied on spatial memory to guide their navigation by analyzing their flight trajectories over repeated trials. Our analysis considered whether the bats showed stereotypy in their flight paths, an indicator of memory-based flight (Griffin, 1958), by constructing 2D spatial cross correlations of the flight trajectories across trials within each experimental session (Barchi et al., 2013). Our results show low correlation values, and confirm that bats were relying not on spatial memory (Falk et al., 2014), but instead on active sensing, in this flight task (Figure 1—figure supplement 1, also see Materials and methods).
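The stereotypy analysis can be sketched as follows — a minimal version that rasterizes each trajectory into a 2D occupancy map and takes the peak of the normalized cross-correlation between maps (room size, bin count, and function names are our assumptions, not the paper's):

```python
import numpy as np
from scipy.signal import correlate2d

def occupancy_map(xy, room=(6.0, 6.0), bins=60):
    """Rasterize a 2D flight trajectory (N x 2 array, metres) into a
    normalized occupancy histogram over the flight room."""
    H, _, _ = np.histogram2d(xy[:, 0], xy[:, 1],
                             bins=bins, range=[[0, room[0]], [0, room[1]]])
    return H / H.sum() if H.sum() else H

def path_similarity(map_a, map_b):
    """Peak of the normalized 2D cross-correlation between two occupancy
    maps; ~1 for near-identical (stereotyped) paths, lower for dissimilar."""
    a = map_a - map_a.mean()
    b = map_b - map_b.mean()
    denom = np.sqrt((a ** 2).sum() * (b ** 2).sum())
    if denom == 0:
        return 0.0
    return correlate2d(a, b, mode='full').max() / denom
```

Low peak values across trial pairs within a session would indicate non-stereotyped flight, consistent with echo-guided rather than memory-guided navigation.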

Figure 1 with 3 supplements
Experimental setup and methodology.

(A) Configuration of the experimental flight room for wireless, chronic neural recordings from freely flying echolocating bats. Shown is the bat (in brown) with the neural telemetry device mounted on the head (in green). The telemetry device transmits RF signals to an RF receiver connected to an amplifier and an analog-to-digital recording system. The bat’s flight path (in red) is reconstructed by 16 motion capture cameras (not all are shown) tracking three reflective markers mounted on the dorsal surface of the telemetry device (3 mm round hemispheres). While the bat flies, it encounters four different, cylindrical flight obstacles (shown in grey), and the sonar vocalizations are recorded with a wide-band microphone array mounted on the walls. (B) Overhead view of the room in the platform-landing task. The bat flew across the room (red line) using echolocation to navigate (black circles are sonar vocalizations) while recordings were made wirelessly from the SC (as shown in panel A). Vocalizations produced on this trial are shown in greater detail in the bottom panels (filtered audio trace and corresponding spectrogram). The inset, on the right, shows a zoomed-in view of the spectrogram of one call, indicated by the red box. (C) Histological reconstruction of the silicon probe tract through the superior colliculus (SC) of one bat in the study. Shown are four serial coronal sections, approximately 2.5 mm from bregma, at the location of the SC. Lesions at the site of the silicon probe are indicated with black arrows. Also marked in the most rostral section are the locations of the SC, medial geniculate body (MGB), hippocampus (HPC), cortex, and dentate gyrus (DG). (D) Simultaneous neural recordings from the SC at the two recording sites identified with a blue square and a green square in the silicon probe layout shown in Figure 1C (layout of the 16-channel silicon probe used for SC recordings). (E) Top, change in sonar pulse duration as a function of object distance. Bottom, change in pulse interval as a function of object distance.

https://doi.org/10.7554/eLife.29053.003

While the bats performed natural sensory-guided behaviors, sonar calls were recorded using a wide-band ultrasound microphone array (Figure 1A,B – grey circles are microphones; see Figure 1B, raw oscillogram in middle panel and spectrograms in bottom panel and inset). The bat’s 3D flight trajectory and head aim were measured using high-speed Vicon motion capture cameras (Figure 1A,B, frame-rate 300 Hz). In flight, bats displayed natural adaptations in sonar behavior (Griffin, 1958; Simmons et al., 1979). Specifically, they increased echolocation pulse rate (PR) and decreased pulse duration (PD) as they approached objects or their landing points (Figure 1E), and they also produced sonar sound groups (SSGs), or clusters of vocalizations, to inspect objects in space (Falk et al., 2014; Moss et al., 2006; Petrites et al., 2009; Sändig et al., 2014; Wheeler et al., 2016).

Extracellular neural activity was recorded with a 16-channel silicon probe, affixed to a microdrive, and implanted in the bat SC. Neural activity was transmitted wirelessly via radio telemetry (Triangle BioSystems International; Figure 1A – green box). Figure 1C shows histology of SC recording sites, and Figure 1D shows simultaneous neural recordings from two channels (see also Materials and methods). Figure 1—figure supplement 2 demonstrates single-cell neural recordings across multiple trials (also see Figure 1—figure supplement 3 for clustering efficacy).

Echo model - Reconstructing the instantaneous acoustic stimulus space at the ears of the bat

To measure auditory spatial receptive fields in the bat SC, we first determined the azimuth, elevation and distance of objects, referenced to the bat’s head direction and location in the environment (Figure 2—figure supplement 1A shows a cartoon of a bat with a telemetry recording device and markers to estimate the bat’s head direction, Figure 2—figure supplement 1B shows a top view of the bat’s head with the telemetry device and head tracking markers, also see Materials and methods). In order to determine the 3D direction and arrival time of sonar echoes returning to the bat, we relied on the physics of sound to establish an echo model of the bat’s instantaneous sensory space. The echo model takes into account an estimate of the beam width of the bat’s sonar calls, its 3D flight trajectory, its head direction, as well as physical parameters of sound (Figure 2—figure supplement 1A and B – schematic, see Materials and methods) to compute a precise estimate of the time of arrival of echoes at the bat’s ears, as well as the 3D location of the echo sources (Figure 2A – cartoon explains the echo model, with cones showing the sonar beam pattern, Figure 2B – the time series of call and echoes from the cartoon in Figure 2A; Figure 2C – actual bat flight trajectory with sonar vocalizations, orange circles, and 3D head aim vectors, black lines; Figure 2D and E – the instantaneous solid angles of the head aim with respect to objects and echo arrival times of sonar returns from different objects along the trajectory in 2C; also see Materials and methods).

Figure 2 with 2 supplements
Use of the echo model to determine the bat’s ongoing sensory signal reception.

(A) Cartoon of a bat flying through space encountering two obstacles. The bat’s flight trajectory moves from right to left, and is indicated by the black dotted line. Two sonar vocalizations produced in flight are indicated by the gray cones. (B) Reconstruction of sonar vocal times (top), and returning echo times (bottom) for the cartoon bat in panel A. Note that two echoes (blue and yellow) return to the bat following the first sonar vocalization, while only one echo (yellow) returns after the second vocalization, because the relative positions of the bat and objects change over time. (C) One experimental trial of the bat flying and navigating around obstacles (large circular objects). The bat’s flight path (long black line) starts at the right and the bat flies to the left. Each vocalization is indicated with a yellow circle, and the direction of the vocalization is shown with a short black line. (D) Trial time versus solid angle to each obstacle for the flight shown in C. Individual vocalizations are indicated with black circles, and the color of each line corresponds to the objects shown in C. (E) Time expanded spectrogram of the highlighted region in D. Shown are three sonar vocalizations, and the colored lines indicate the time of arrival of each object’s echo as determined by the echo model (colors as in C and D). (F) Snapshot of highlighted region (open black circle) in panel C showing the position of objects when the bat vocalized at that moment. (G) Snapshot of highlighted region (open red circle) in panel C showing the position of objects when the bat vocalized at that moment. In panels F and G, orange circles are microphones (only part of the array is shown here).

https://doi.org/10.7554/eLife.29053.007

The echo model was used to construct the instantaneous acoustic sensory space of the bat each time it vocalized and received echoes from physical objects in its flight path. We first determined the onset of each vocalization produced by the bat, then the 3D position of the bat at the time of each sonar vocalization, and the 3D relative positions of flight obstacles. Past work has demonstrated that the big brown bat’s sonar beam axis is aligned with its head (Ghose and Moss, 2003; 2006), and the direction of the sonar beam was inferred in our study from the head-mounted markers showing the head aim of the bat. We then used the 50 deg (−6 dB) width of the sonar beam at 30 kHz (Hartley and Suthers, 1989) to determine when the sonar beam ensonified flight obstacles in the animal’s path and returned echoes. From this calculation, we computed the direction and time of arrival of all echoes returning to the bat’s ears each time the animal emitted a sonar call.
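The geometric core of this computation can be sketched as follows — for each call, find which objects fall inside the sonar beam and when their echoes return (the speed of sound, the beam half-width constant, and all names are our assumptions, not the paper's implementation):

```python
import numpy as np

SPEED_OF_SOUND = 343.0   # m/s at room temperature (assumed)
BEAM_HALF_WIDTH = 25.0   # deg; half of the 50 deg (-6 dB) beam width at 30 kHz

def echoes_for_call(bat_pos, head_dir, objects):
    """For one sonar call, return (object index, off-axis angle in deg,
    echo delay in ms) for every object inside the sonar beam.

    bat_pos  : (3,) bat position at call onset, metres
    head_dir : (3,) unit vector of head aim (sonar beam axis)
    objects  : (N, 3) object positions, metres
    """
    out = []
    for i, obj in enumerate(np.atleast_2d(objects)):
        v = obj - bat_pos
        dist = np.linalg.norm(v)
        # angle between the beam axis and the direction to the object
        cosang = np.clip(np.dot(v / dist, head_dir), -1.0, 1.0)
        angle = np.degrees(np.arccos(cosang))
        if angle <= BEAM_HALF_WIDTH:
            delay_ms = 2.0 * dist / SPEED_OF_SOUND * 1000.0  # two-way travel
            out.append((i, angle, delay_ms))
    return out
```

For example, an object 1.715 m dead ahead returns an echo at a 10 ms pulse-echo delay, while an object 90 deg off the beam axis is excluded.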

Although it is possible to use a wireless, head-mounted microphone to record the returning echo stream, there are significant limitations to this methodology. First, a single head-mounted microphone has a higher noise floor than the bat’s auditory receiver and therefore does not pick up all returning echoes that the bat may hear. Moreover, a single microphone would add weight to the devices carried by the bat in flight and could only provide information regarding echo arrival time, not sound source direction. A head-mounted microphone is therefore insufficient to compute the 3D locations of echo sources, thus highlighting the importance of the echo model in our study to compute the bat’s instantaneous 3D sensory space.

We computed errors in the measurements of head-aim as well as in the estimation of echo arrival times at the bat’s ears (Figure 2—figure supplement 2). Our measurements indicate that the maximum error in the reconstruction of the bat head-aim does not exceed 5.5 degrees, and the error in echo arrival time measurement is between 0.35 and 0.65 ms (see Figure 2—figure supplement 2C and D – estimation of errors in head-aim reconstruction, Figure 2—figure supplement 2 – errors in echo arrival time; see Materials and methods). To confirm that the echo model accurately calculated the 3D positions of sonar objects, we used echo playbacks from a speaker and microphone pair (see Materials and methods, Figure 2—figure supplement 2), with additional validation using a microphone array placed behind the bat’s flight direction. The microphone array recorded the echoes reflected off objects as the bat flew and produced sonar vocalizations, which were analyzed with time of arrival difference (TOAD) algorithms to compare the measured echo sources with the calculated echo sources based on our echo model (see Materials and methods).
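A TOAD localization step of this kind can be sketched as a brute-force grid search that finds the source position whose predicted arrival-time differences best match the measured ones (the microphone layout, room size, and grid resolution are illustrative; the study's actual algorithms may differ):

```python
import numpy as np

def locate_by_toad(mics, toads, c=343.0, room=(4.0, 4.0), res=0.05):
    """Estimate a 2D echo-source position from time-of-arrival differences
    (TOADs) measured relative to the first microphone, via grid search.

    mics  : (M, 2) microphone positions, metres
    toads : (M-1,) arrival-time differences mic_k minus mic_0, seconds
    """
    mics = np.asarray(mics, dtype=float)
    best, best_err = None, np.inf
    for x in np.arange(0.0, room[0] + res, res):
        for y in np.arange(0.0, room[1] + res, res):
            p = np.array([x, y])
            d = np.linalg.norm(mics - p, axis=1)
            pred = (d - d[0])[1:] / c        # predicted TOADs at this point
            err = float(np.sum((pred - toads) ** 2))
            if err < best_err:
                best, best_err = p, err
    return best
```

With four microphones the three independent TOADs over-determine a 2D source, so the residual at the true location is a useful check on the echo model's predicted echo-source positions.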

3D spatial tuning of single neurons in the SC of free flying bats

The establishment of the echo model was a critical step in computing 3D spatial tuning of SC neurons recorded from the animals in flight. The spatial acoustic information (echo arrival times and 3D locations of echo sources) obtained from the echo model was converted into 3D egocentric coordinates to compute the acoustic stimulus space from the point of view of the flying bat as it navigated the room (Figure 1—figure supplement 1F and G, see Materials and methods). Bats were released from different locations in order to cover the calibrated volume of the flight room (Figure 3—figure supplement 1A), and they continuously produced echolocation calls, which resulted in a series of echoes from objects during each recording session (Figure 3—figure supplement 1A,B and C). We also released the bats from multiple locations in the room so that they took a variety of flight paths through the room, and interacted with the flight obstacles from a broad range of directions and distances, which is necessary for computing spatial receptive fields. These data therefore yielded measurements of echoes returning to the animal from objects at many different directions and distances in egocentric space (Figure 3—figure supplement 1D - range coverage, E - azimuth coverage, and F - elevation coverage).

The output of the echo model was used to analyze audio/video-synchronized neural recordings from single units (see Figure 1E, Figure 1—figure supplement 1 and Materials and methods) taken in the midbrain SC using a 16-channel wireless telemetry system. We recorded a total of 182 single neurons. We then classified neurons as sensory (n = 67), sensorimotor (n = 45), vocal premotor (n = 26), or unclassified (n = 44), as described in the Materials and methods section. Here we focus on sensory neurons in the SC of free-flying bats.

For all sensory neurons we first calculated the distance, or echo-delay tuning (Figure 3A and B). An example reconstruction of a neuron’s spatial tuning along the distance axis is displayed in Figure 3B, showing neural activity aligned to sonar vocalization times (red arrows), and responses to echoes returning at ~10 ms delay. Arrival time of the first echo at the bat’s ears is indicated with a green arrow, and a second returning echo (from another, more distant object) is indicated with a blue arrow. Note that this example neuron does not spike in response to the second echo, nor to echoes arriving very early (Figure 3C, top panel), or late (Figure 3C, bottom panel). Figure 3D shows the computed distance (echo-delay) tuning profile of this same example neuron.
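As a toy illustration of how an echo-delay tuning profile of this kind can be computed from call times, modeled echo delays, and spike times (units, the response window, and bin widths are illustrative, not the paper's parameters):

```python
import numpy as np

def delay_tuning(call_times_ms, echo_delays_ms, spike_times_ms,
                 window_ms=4.0, bin_edges=np.arange(0.0, 22.0, 2.0)):
    """Spike probability as a function of pulse-echo delay.

    call_times_ms  : (N,) sonar call onsets
    echo_delays_ms : (N,) delay of the (single) echo after each call
    spike_times_ms : (M,) spike times
    A call counts as 'answered' if any spike falls within `window_ms`
    after its echo arrives at the ears.
    """
    calls = np.asarray(call_times_ms, dtype=float)
    delays = np.asarray(echo_delays_ms, dtype=float)
    spikes = np.asarray(spike_times_ms, dtype=float)
    answered = np.array([
        np.any((spikes >= t + d) & (spikes < t + d + window_ms))
        for t, d in zip(calls, delays)])
    counts, _ = np.histogram(delays, bins=bin_edges)
    hits, _ = np.histogram(delays[answered], bins=bin_edges)
    with np.errstate(invalid='ignore', divide='ignore'):
        prob = np.where(counts > 0, hits / counts, 0.0)
    return bin_edges[:-1], prob
```

A neuron like the example in Figure 3 would show high spike probability only in the bin containing its best delay (~10 ms) and near-zero probability at shorter and longer delays.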

Figure 3 with 2 supplements
Range tuning of midbrain neurons.

(A) A cartoon representation showing the target range estimation in a free-flying echolocating bat. The difference between the call production time (T0, red arrow) and the echo arrival time (TE, green arrow) is a function of target distance. (B) Sensory responses of a single neuron to echo returning at a specific delay with respect to sonar vocal onset from actual trial data. The arrival time of the first echo (TE1) is indicated with a green arrow, the second echo (TE2 – from a more distant object) is indicated with a blue arrow. Note that this neuron responds to the echo arriving at ~10 milliseconds. (C) When the echo returns at a shorter delay, the neuron does not respond; and the neuron similarly does not respond to longer pulse-echo delays. (D) Histogram showing target distance tuning (i.e. pulse-echo delay tuning) for the neuron in panel B and C. Note the narrow echo delay tuning curve.

https://doi.org/10.7554/eLife.29053.010

Using the echo model, we also calculated the tuning profiles of each neuron in azimuth and elevation (Figure 3—figure supplement 2A – azimuth and B – elevation). Once we calculated the azimuth, elevation, and distance tuning of neurons (Figure 4A), we constructed three-dimensional spatial response profiles for each neuron. Figure 4B shows surface plots of the three-dimensional tuning for two other example neurons. Of the 67 single sensory neurons (28 from Bat 1, in green, and 39 from Bat 2, in brown) recorded in the SC of two big brown bats, 46 neurons (19 from Bat 1 and 27 from Bat 2) showed selectivity to stimulus locations in 3D egocentric space (Figure 4C, see Materials and methods for details about spatial selectivity analysis), and these spatial tuning profiles were stable within recording sessions (Figure 4—figure supplement 1). Additionally, the selectivity of the neurons in the distance dimension did not vary as a function of dorsal-ventral location in the SC (Figure 4—figure supplement 2). Further, three neurons were tuned to both azimuth and range, two were tuned to both range and elevation, and five, three and three neurons were tuned exclusively to range, azimuth and elevation, respectively (see Figure 4—figure supplement 3). Best echo delays spanned values of 4 to 12 ms, corresponding to the distances of objects encountered by the bat (~70–200 cm) in our flight room (Figure 4D, E and F show histograms of standard deviations of normal fits to spatial receptive fields, also see Materials and methods).

Figure 4 with 3 supplements
Spatial tuning of neurons recorded in the SC.

(A) Egocentric locations of echo sources eliciting activity from a single SC neuron. Red dots indicate echo source locations eliciting spikes, black dots indicate echo source locations where a spike is not elicited. Contour plots show the XY, YZ, and ZX projections of the spatial tuning of the neuron. (B) 2D spatial tuning plots for two separate neurons (left column and right column). Shown are surface heat plots, where the size of the peak indicates the spike probability for a neuron for each 2D coordinate frame. (C) Centers of 3D spatial tuning for 46 different neurons recorded in the SC. Different bats are indicated by different colors (Bat 1 in green, Bat 2 in brown). (D, E and F) Left to right: azimuth, elevation, and range half width tuning properties for 46 different neurons recorded in the SC (colors as in panel C).

https://doi.org/10.7554/eLife.29053.013

Adaptive sonar behavior modulates 3D spatial receptive fields

Guided by growing evidence that an animal’s adaptive behaviors and/or attentional state can modulate sensory responses of neurons in the central nervous system (Bezdudnaya and Castro-Alamancos, 2014; Fanselow and Nicolelis, 1999; McAdams and Maunsell, 1999; Reynolds and Chelazzi, 2004; Spitzer et al., 1988; Winkowski and Knudsen, 2006; Womelsdorf et al., 2006), we investigated whether the bat’s active sonar inspection of objects in space alters the 3D sensory tuning of SC neurons. We compared the spatial receptive fields of single SC neurons when the bat produced isolated sonar vocalizations (non-SSGs) to times when it adaptively increased sonar resolution by producing SSGs (Figure 5A – an example trial; non-SSGs, blue circles; SSGs, red circles; Figure 5B – spectrograms from the data in 5A, with SSGs again highlighted in red; Figure 5C – a plot showing SSGs can be quantitatively identified, see Materials and methods).
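One way to quantitatively identify SSGs from the pulse-interval sequence — a run of similar, short intervals flanked on both sides by markedly longer intervals — can be sketched like this (the 1.2× flanking-interval criterion follows conventions in the behavioral literature; the exact thresholds here are illustrative):

```python
import numpy as np

def label_ssg_calls(call_times, factor=1.2):
    """Mark calls that belong to sonar sound groups (SSGs): runs of similar,
    short pulse intervals flanked on both sides by intervals at least
    `factor` times longer (an operational criterion; threshold illustrative)."""
    t = np.asarray(call_times, dtype=float)
    pi = np.diff(t)                       # pulse intervals
    in_ssg = np.zeros(len(t), dtype=bool)
    i = 0
    while i < len(pi):
        j = i
        # extend the run while intervals stay similar to the first one
        while j + 1 < len(pi) and pi[i] / factor <= pi[j + 1] <= pi[i] * factor:
            j += 1
        run_max = pi[i:j + 1].max()
        before = pi[i - 1] if i > 0 else np.inf
        after = pi[j + 1] if j + 1 < len(pi) else np.inf
        # the run is an SSG only if both flanking intervals are much longer
        if before >= factor * run_max and after >= factor * run_max:
            in_ssg[i:j + 2] = True        # calls i .. j+1 form the group
        i = j + 1
    return in_ssg
```

For example, a triplet of calls 10 ms apart embedded in a sequence of 100-ms intervals is labeled an SSG, while the surrounding isolated calls are not.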

Figure 5 with 1 supplement
Adaptive vocal behavior drives changes in spatial tuning of SC neurons.

(A) Three-dimensional view of one flight path (in black) through the experimental room. Individual sonar vocalizations that are not included in a sonar sound group (non-SSG) are shown as blue circles, and sonar vocalizations within a sonar sound group (SSG) shown in red. (B) Top, spectrogram of sonar vocalizations emitted by the bat in panel A. Bottom, expanded region of top panel to indicate SSGs and the definition of pulse interval (PI). (C) Change in pulse rate (1/PI) during the flight shown in panel A, and for the vocalizations shown in panel B. Note the increase in pulse rate indicative of SSG production. (D) Change in spatial tuning of example neuron when the bat is producing SSGs (red) as opposed to non-SSGs (blue). Note that the distance tuning decreases, as well as the width of the tuning curve, when the bat is producing SSGs. (E) Summary plot of change in spatial tuning width when the bat is producing SSGs (n = 53 neurons). Many single neurons show a significant sharpening (n = 26) in spatial tuning width along the distance axis when the bat is producing SSGs and listening to echoes, as compared to times when the bat is receiving echoes from non-SSG vocalizations (Bat 1 is indicated with green, Bat 2 is indicated with brown; units with significant sharpening at p<0.05 are indicated with closed circles, non-significant units indicated with open circles; Rank-Sum test). (F) Summary plot of change in mean peak spatial tuning when the bat is producing SSGs (n = 51 neurons). Many neurons show a significant decrease (n = 32) in the mean of the peak distance tuning during the times of SSG production as compared to when the bat is producing non-SSG vocalizations (Bat 1 is indicated with green, Bat 2 is indicated with brown; units with a significant decrease at p<0.05 are indicated with closed circles, non-significant units indicated with open circles; Brown-Forsythe test).

https://doi.org/10.7554/eLife.29053.017

We discovered that a neuron’s distance tuning is sharper in response to echoes from SSGs than in response to echoes from single (non-SSG) calls (Figure 5D shows an example neuron). Figure 5E shows summary data comparing the sharpness of distance tuning to echoes returning from SSG and non-SSG calls (n = 53 neurons which met the power analysis criterion, see Materials and methods for details about power analysis; data from Bat 1 are shown in green, Bat 2 in brown). Supplementary file 1A gives details of the sharpness of distance tuning comparisons for SSG and non-SSG tuning, using the Brown-Forsythe test, for each of the neurons in Figure 5E.

We also found that a neuron’s best echo delay (target distance) is often shifted to shorter delays (closer objects) when the bat is engaged in the production of SSGs, suggesting that distance tuning is dynamically remapped when the bat actively inspects objects in its environment (Figure 5D example). Figure 5F shows summary data comparing the mapping of distance tuning of single neurons in response to echoes from SSG and non-SSG calls (n = 51 neurons which met the power analysis criterion, see Materials and methods for details about power analysis; data from Bat 1 are shown in green, Bat 2 in brown). Supplementary file 1B gives details of the mean distance tuning comparisons for SSG and non-SSG echo delay responses, using the Brown-Forsythe test, for each of the neurons in Figure 5F. In Figure 5E and F, filled circles indicate cells with a significant sharpening (Figure 5E) or a significant decrease in peak distance tuning in response to echoes from SSGs (Figure 5F), while open circles indicate non-significant comparisons (rank-sum, p<0.05). We also examined the responses to echoes returning from the first sonar vocalization of an SSG versus the last vocalization of an SSG. We found that there is no difference in spatial tuning profiles computed separately for the first and last echoes of SSGs, but there is a significant increase in spike probability in response to echoes from the last vocalization of an SSG (Figure 5—figure supplement 1).
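Per-neuron comparisons of this kind can be sketched with standard tests: the Brown-Forsythe test is Levene's test computed about the median, and the rank-sum test compares central tendency. The echo-delay samples below are synthetic stand-ins for one neuron's SSG and non-SSG response delays:

```python
import numpy as np
from scipy.stats import levene, ranksums

rng = np.random.default_rng(0)
# Synthetic echo delays (ms) that drove spikes in one hypothetical neuron:
# during SSG production tuning is narrower and shifted to shorter delays.
non_ssg = rng.normal(loc=9.0, scale=2.0, size=200)
ssg = rng.normal(loc=7.5, scale=1.0, size=200)

# Brown-Forsythe (median-centered Levene): compare tuning width (spread)
_, p_width = levene(non_ssg, ssg, center='median')
# Wilcoxon rank-sum: compare the location of the tuning peak
_, p_shift = ranksums(non_ssg, ssg)
```

A neuron would be counted as significantly sharpened or shifted when the corresponding p-value falls below 0.05, matching the filled-circle convention in Figure 5E and F.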

Gamma power increases during epochs of sonar sound group production

Similar to foveation, which is a behavioral indicator of visual attention to resolve spatial details (Reynolds and Chelazzi, 2004), measurements of adaptive sonar behavior have been used as a metric for the bat’s acoustic gaze to closely inspect objects (Moss and Surlykke, 2010). Previous behavioral research shows that bats increase the production of sonar sound groups (SSGs) under conditions that demand high spatial resolution, e.g. in dense acoustic clutter and when tracking erratically moving targets (Kothari et al., 2014; Moss et al., 2006; Petrites et al., 2009; Sändig et al., 2014). SSGs are clusters of echolocation calls, often produced at a stable rate (Figure 6A, see Materials and methods), which are hypothesized to sharpen acoustic images of objects in the environment (Moss and Surlykke, 2010), and are distinct from the overall increase in sonar call rate of a bat approaching a target. Previous work in other systems has shown that the gamma frequency band (40–140 Hz; Sridharan and Knudsen, 2015) of the LFP in the SC increases in power when an animal is attending in space (Gregoriou et al., 2009; Gunduz et al., 2011; Sridharan and Knudsen, 2015), and we investigated whether this conserved indicator of spatial attention also appears during SSG production. Shown in Figure 6B is a comparison of gamma band activity during the bat’s production of SSGs versus non-SSGs, demonstrating an increase around the time of SSG production. Displayed is the call triggered average (±s.e.m.) of the gamma band across recording sites, for SSG (red, n = 539) and non-SSG (blue, n = 602) production. Figure 6C illustrates the significant increase in gamma band power during the production of SSGs (red) as compared to non-SSGs (blue) on a neuron-by-neuron basis (n = 26), and this finding was consistent across recording depths (Figure 6—figure supplement 1).
Only sites in which neural recordings were unaffected by motion artifact were included in this analysis (Figure 6—figure supplement 2; see also Materials and methods). In agreement with past work in other systems and brain areas (Gregoriou et al., 2009; Gunduz et al., 2011; Sridharan and Knudsen, 2015), there was a significant increase in gamma power when the bat produced SSGs, providing further evidence that SSGs indicate times of sonar inspection and spatial attention (Figure 6C, p<0.005, Wilcoxon signed-rank test).

Figure 6 with 2 supplements
Increases in gamma power correlate with sonar-guided spatial attention.

(A) Schematic of sonar sound group (SSG) determination. SSGs are identified by brief epochs of higher vocal rate (i.e. shorter interval in red) surrounded by vocalizations at a lower rate (i.e. longer interval in blue). (B) Average gamma waveform at the onset of single sonar vocalizations, or non-SSGs (blue, n = 26), compared to the average gamma waveform at the onset of vocalizations contained within an SSG (red, n = 26). Plotted is the mean ±s.e.m. (C) Pair-wise comparison of power in the gamma band during the production of non-SSG vocalizations (blue) and SSG vocalizations (red). There is a significant increase in gamma power during SSG production across neurons (n = 26, Wilcoxon signed-rank test, p<0.01). (D) Normalized increase in gamma power at the time of auditory spike onset for each neuron during the production of non-SSG vocalizations. (E) Normalized increase in gamma power at the time of auditory spike onset for each neuron during the production of SSG vocalizations. Note the higher gamma power during SSG production, and the temporal coincidence of the increase in gamma with spike time (vertical white line indicates spike time, horizontal white line separates data from Bat 1, below, and Bat 2, above).

https://doi.org/10.7554/eLife.29053.019

Additionally, we analyzed the timing of the gamma power increase with respect to echo-evoked neural activity. Because sensing through echolocation temporally separates vocal production time from echo arrival time, we can accurately measure the amplitude of gamma activity with respect to motor production and/or sound reception. The data show that the increase in gamma power occurred specifically around the time of the echo-evoked spike events in SC sensory neurons (Figure 6D, non-SSGs, and Figure 6E, SSGs; vertical white line indicates onset of sensory evoked spikes, horizontal white line separates data from Bat 1, below, and Bat 2, above), and that the increase in gamma band power is temporally precise, with the peak in gamma power occurring within 10 milliseconds of spike time.

Discussion

Spatially-guided behaviors, such as obstacle avoidance, target tracking and reaching, all depend on dynamic egocentric sensory representations of the 3D positions of objects in the environment. An animal must not only compute the direction and distance to targets and obstacles, but also update this information as it moves through space. How does the nervous system of a freely moving animal encode 3D information about the location of objects in the physical world? And does active inspection of objects in the environment shape 3D sensory tuning? Our neural recordings from the midbrain of a freely moving animal engaged in natural, spatially-guided behaviors offer answers to these fundamental questions in systems neuroscience.

Here we present the first characterization of 3D sensory responses of single neurons in the midbrain SC of an animal actively interacting with its physical environment. We also show that echo-evoked spatial tuning of SC neurons sharpens along the range axis and shifts to closer distances when the bat inspects objects in its acoustic scene, as indexed by the production of sonar sound groups (SSGs) (Falk et al., 2014; Kothari et al., 2014; Moss et al., 2006; Petrites et al., 2009; Sändig et al., 2014). It has been hypothesized that the bat produces SSGs to enhance spatial resolution, in a manner similar to foveal fixation, which increases visual resolution (Moss and Surlykke, 2010; Surlykke et al., 2016). Our data provide the first empirical evidence of sharpened 3D spatial resolution of single neurons in the bat’s auditory system with natural and dynamic adaptations in the animal’s active orienting behaviors.

Role of the SC in 3D spatial orientation

The superior colliculus (SC), a midbrain sensorimotor structure, is implicated in species-specific sensory-guided orienting behaviors, target selection and 2D spatial attention (Duan et al., 2015; Knudsen, 2011; Krauzlis et al., 2013; Lovejoy and Krauzlis, 2010; McPeek and Keller, 2004; Mysore and Knudsen, 2011; Mysore et al., 2011; Zénon and Krauzlis, 2012). Past research has led to conflicting views as to whether the SC plays a role in orienting in 3D space (Chaturvedi and van Gisbergen, 1998; Chaturvedi and van Gisbergen, 1999; Chaturvedi and van Gisbergen, 2000; Hepp et al., 1993; Van Horn et al., 2013; Leigh and Zee, 1983; Walton and Mays, 2003), but limited evidence from sensory mapping in primates shows response selectivity to binocular disparity (Berman et al., 1975; Dias et al., 1991), and vergence eye movements (Chaturvedi and van Gisbergen, 1998; Chaturvedi and van Gisbergen, 1999; Chaturvedi and van Gisbergen, 2000; Van Horn et al., 2013), indicating a role of the SC in 3D visuomotor integration. Here, we present the first direct evidence of 3D egocentric sensory responses to physical stimuli in the midbrain of an animal freely moving through its environment. Our results therefore provide a critical bridge to understanding the brain’s dynamic representation of the 3D physical world.

Behavioral and neural correlates of spatial attention

Psychophysical studies have reported that human and non-human primates show increased visual detection and discrimination performance when stimuli are presented at attended locations (Bichot et al., 2005; Carrasco, 2011; Posner, 1980; Wurtz and Mohler, 1976; Yeshurun and Carrasco, 1999). Neural recording experiments have corroborated these results by showing that spatial attention modulates firing rates of cortical neurons representing attended locations (McAdams and Maunsell, 1999; Reynolds and Chelazzi, 2004; Reynolds et al., 1999; Spitzer et al., 1988; Womelsdorf et al., 2006). Other studies report an increase in the gain of tuning curves at an attended location or a selected stimulus feature, while a decrease in neural response occurs for unattended locations or features (McAdams and Maunsell, 1999; Treue and Martínez Trujillo, 1999; Verghese, 2001).

The midbrain SC has been specifically implicated in an attention network through past studies of SC inactivation that produced behavioral deficits (Lovejoy and Krauzlis, 2017; McPeek and Keller, 2004), but none of these studies measured the spatial selectivity of single SC neurons under conditions in which animals freely inspected objects in the physical environment. Evidence for sharpening of tuning curves and/or remapping spatial receptive fields with attention has been limited to a few studies showing shifts in 2D cortical tuning to artificial visual stimuli in restrained animals (Spitzer et al., 1988; Womelsdorf et al., 2006). And in studies of the auditory system, behavioral discrimination of acoustic stimuli has been shown to influence the response profiles of cortical neurons in restrained ferrets (Fritz et al., 2003, 2007). Here we report for the first time dynamic shifts in 3D sensory tuning with sonar-guided attention in animals engaged in natural orienting behaviors.

Our study not only revealed changes in single neuron 3D spatial selectivity with dynamic sonar inspection of objects in the physical scene, but also a corresponding increase in the gamma band of the local field potential (LFP). Past work in humans, non-human primates, other mammals, and birds have reported stimulus driven gamma band modulation when stimuli are presented at attended locations (Fries et al., 2001; Goddard et al., 2012a; Gregoriou et al., 2009; Sridharan and Knudsen, 2015; Sridharan et al., 2011). Moreover, changes in the gamma band of the LFP have been shown to occur for stimulus selection and discrimination mediated by touch, vision, and hearing, suggesting that gamma oscillations may reflect multi-modal network activity related to attention (Bauer et al., 2006; Canolty et al., 2006; Gruber et al., 1999; Senkowski et al., 2005). Our findings that gamma power increases during epochs of SSG production and echo reception support the hypothesis that the bat’s adaptive sonar behaviors serve as indicators of spatial attention (Moss and Surlykke, 2010).

3D allocentric versus 3D egocentric representations in the brain

It is important to emphasize the distinction between our report here on 3D egocentric sensory responses in the midbrain SC of the insectivorous echolocating big brown bat, and 3D allocentric memory-based representation of space in the hippocampus of the echolocating Egyptian fruit bat (Yartsev and Ulanovsky, 2013). These two distinct frames of reference are used for different suites of natural behaviors. Egocentric sensory representation of space contributes to overt and covert orienting to salient stimuli (Knudsen, 2011; Krauzlis et al., 2013; Mysore and Knudsen, 2011) and has not previously been described in free-flying bats. By contrast, 3D allocentric (Geva-Sagiv et al., 2015; Yartsev and Ulanovsky, 2013) and vectorial representations (Sarel et al., 2017) in the bat hippocampus support spatial memory and navigation. Further, published studies on the Egyptian fruit bat hippocampus have not considered the acoustic sensory space of this species that uses tongue clicks to echolocate (Yovel et al., 2010), nor potential modulation of hippocampal activity by sonar signal production. In other words, past work on the Egyptian fruit bat hippocampus shows 3D spatial memory representation; whereas, our study of the big brown bat SC reveals important new discoveries of state-dependent midbrain sensory representation of 3D object location.

Depth tuning of single neurons in the bat auditory system

Finally, and importantly, our results fill a long-standing gap in the literature on the neural representation of target distance in the bat auditory system, which has almost exclusively been studied in passively listening animals (Dear and Suga, 1995; Feng et al., 1978; O'Neill and Suga, 1979; Valentine and Moss, 1997), but see Kawasaki et al., 1988. Echolocating bats estimate target distance from the time delay between sonar call emission and echo reception, and show behavioral range discrimination performance of less than 1 cm, which corresponds to an echo delay difference of about 60 μsec (Moss and Schnitzler, 1995; Simmons, 1973). The bat’s sonar signal production is therefore integral to target ranging, and yet, for nearly four decades of research, scientists have simulated the dimension of target distance in neural recording experiments in restrained bats by presenting pairs of synthetic sound stimuli (P/E pairs – pulse/echo pairs), one mimicking the echolocation call, and a second, delayed and attenuated signal, mimicking the echo. Here, we report the first delay-tuned neural responses to echoes from physical objects in the auditory system of free-flying bats, thus providing a critical test of a long-standing hypothesis that neurons in actively echolocating bats respond selectively to echoes from objects in 3D space.

Beetz et al. (2016a) report that distance tuning of neurons in the auditory cortex of passively listening, anesthetized bats (Carollia perspicillata) is more precise when neurons are stimulated with natural sonar sequences, such as those produced by echolocating bats in the research reported here. Another study of auditory cortical responses in anesthetized bats (Phyllostomus discolor) reports that delay-tuned neurons shift their receptive fields under stimulus conditions that simulate echo flow (Bartenstein et al., 2014). In a related study, Beetz et al. (2016b) show a higher probability of neural firing in cortical neurons of Carollia perspicillata to the first echo in a sequence, which leads them to hypothesize that global cortical inhibition contributes to the representation of the closest object, without active attention. It is possible that global cortical inhibition is an intrinsic feature that enables an animal to represent the most salient (in the above case, closest) stimulus. Our data also show that sensory neurons respond primarily to the first echo arriving in a neuron’s receptive field, as compared to later echoes, and may depend on a similar mechanism. A mechanism of global inhibition for selective attention has also been demonstrated in the barn owl optic tectum (Mysore et al., 2010). Additionally, our data demonstrate a higher probability of auditory responses in the midbrain SC to echoes returning from the last vocalization of an SSG, a finding that can only be demonstrated in a behaving echolocating bat, as it involves feedback between sensing and action. And while studies of auditory cortical processing in anesthetized, passively listening animals can shed light on sensory processing mechanisms, ultimately this information must be relayed to sensorimotor structures, such as the midbrain superior colliculus, which serve to orchestrate appropriate motor commands for spatial navigation and goal-directed orientation.

Our study reveals the novel finding that auditory neurons in awake, behaving echolocating bats show shifts and sharpening of spatial receptive fields with echolocation call dynamics. Crucially, because bats in our study were engaged in a natural spatial navigation task, we could directly investigate the effects of sonar-guided attention on the 3D spatial tuning of single auditory neurons. Our results demonstrate the dynamic nature of 3D spatial selectivity of single neurons in the SC of echolocating bats and show that active behavioral inspection of objects not only remaps range response areas, but also sharpens depth tuning. Furthermore, our data reveal that echo-delay tuning of single SC neurons in actively echolocating bats is sharper than previously reported from recordings in passively listening bats (Dear and Suga, 1995; Menne et al., 1989; Moss and Schnitzler, 1989; Simmons et al., 1979; Simmons et al., 1990; Valentine and Moss, 1997), a finding that bears on a long-standing controversy over the neural basis of fine echo-ranging acuity in bats (Menne et al., 1989; Moss and Schnitzler, 1989; Simmons, 1979; Simmons et al., 1990).

In summary, our study generated new discoveries in the field of systems neuroscience by integrating chronic neural recordings, multimedia tracking of dynamic animal behaviors in the 3D physical environment, and acoustic modeling. We report here the first empirical demonstration that neurons in a freely moving animal encode the 3D egocentric location of objects in the real world and dynamically shift spatial selectivity with sonar-guided attention. Specifically, we show that single neurons in the actively echolocating, free-flying bat respond selectively to the location of objects over a restricted distance (echo delay), azimuth and elevation. Importantly, we discovered that the sensory response profiles of SC neurons become sharper along the range axis and shift to shorter distances (echo delays) when the bat actively inspects physical objects in its environment, as indicated by temporal adjustments in its echolocation behavior. Our discovery of dynamic 3D sensory representations in freely behaving animals calls for comparative studies in other species, which can collectively contribute to a more complete understanding of nervous system function in the context of natural behaviors.

Video 1
Experimental setup for validating the echo model.

This is a two-part movie. The first part shows the layout of the microphone array, which is used to capture the sonar vocalizations of the bat as it flies and navigates around objects in its path. For simplicity, only two objects are shown here. The second part of the movie shows the use of the 14-channel echo microphone array, which captures the returning echoes as the bat flies in the forward direction. Note that the echo microphone array is placed behind the bat, on the wall opposite its flight direction.

https://doi.org/10.7554/eLife.29053.022
Video 2
Validation of echo model using time-difference-of-arrival (TDOA) algorithms.

This is a two-part movie. The first part consists of three panels. The top panel shows an example trajectory as the bat navigates around objects (white and green). The red line is the reconstructed trajectory, and green circles along the trajectory are positions where the bat vocalized. The center and bottom panels are time series of when the bat vocalizes and when echoes arrive at the bat’s ears, respectively. The echo arrival times have been computed using the echo model. The second part of the movie demonstrates the localization of echo sources using TDOA algorithms. This part has four panels. The top left panel shows the spectrogram of the recording of the bat’s vocalizations. The left center and bottom panels show spectrograms of two channels of the echo microphone array. The right panel shows the reconstructed flight trajectory of the bat. Echoes received on four or more channels of the echo microphone array are used to localize the 3D spatial location of the echo sources. These are then compared with the computations of the echo model, and lines are drawn from the microphones to the echo source if the locations are validated.

https://doi.org/10.7554/eLife.29053.023

Materials and methods

Bats

Two adult big brown bats, Eptesicus fuscus, served as subjects in this study. Bats were wild caught in the state of Maryland under a permit issued by the Department of Natural Resources and housed in an animal vivarium at the University of Maryland or Johns Hopkins University. The Institutional Animal Care and Use Committees of the University of Maryland and Johns Hopkins University approved all of the procedures used in the current study.

Experimental design

The two big brown bats were tested in related tasks, carried out in a 6 × 6 × 2.5 m room, illuminated with IR and equipped with 16 high-speed cameras and an ultrasound microphone array (Figure 1, see below). The first bat navigated around objects in a large flight room and landed on a platform. To simplify the task for the second bat, it flew around the room, navigated around objects, and landed on any wall. Both bats were fed mealworms at the end of each trial to keep them active, but they were not rewarded for flight. The flight room was illuminated with infrared lighting (~850 nm) to preclude the bat’s use of vision; ERG data show that Eptesicus does not see wavelengths longer than 600 nanometers (Hope and Bhatnagar, 1979). The room was also equipped with high-speed cameras and an ultrasound microphone array to track the bat’s flight path and record the bat’s echolocation behavior. Bats navigated around obstacles in the room (explained in detail below) and were released at different locations in the room for each trial (eight positions for Bat 1, five positions for Bat 2), which required them to use sonar echoes to steer around obstacles rather than a consistent or memorized flight path around objects in the room (see Figure 3—figure supplement 1A). As such, the bats determined the duration and flight path of each trial. The obstacles were four plastic cylinders (hard plastic, so as to be acoustically reflective), approximately 13 cm in diameter and 30 cm in length.

Once the bat flew freely throughout the room and, in the case of Bat 1, learned to land on a platform, a surgery was performed to implant in the midbrain superior colliculus (SC) a 16-channel chronic recording silicon probe (Neuronexus) mounted on a custom microdrive. The bats’ weights were between 18 and 21 grams, and the weight of the implant, microdrive and transmitter device was 3.8 grams. Each bat was given several days to rest and acclimate to the implanted device, after which it was able to fly and navigate around objects in the flight room. Data collection began after the animal was able to perform ~30 flight trials per session, which took place twice a day (morning and afternoon) in the experimental test room. During experimental sessions, there was no conditional reward; instead, the bats were fed mealworms at the end of every trial, that is, when they landed. Bat 1 flew for 12 sessions, and Bat 2 flew for 15 sessions. For each recording session, the positions of the four flight obstacles were varied. Further, across trials the bat was released from different locations in the room. The obstacle configurations and flight start locations were varied to ensure that the bat’s flight trajectories covered the entire room and that the stimulus space sampled by the bat changed from trial to trial. This approach prevented the bats from relying on spatial memory and/or stereotyped flight paths. Figure 3—figure supplement 1A shows the bat’s flight trajectories in a single session and illustrates room coverage. Coverage was restricted in elevation, due to the height of the flight room, with a floor-to-ceiling dimension of approximately 250 cm. Although the landing behavior of the bats differed slightly (i.e. landing on a platform vs. a wall), neural analysis was focused on the times when the animals were in flight, and the data from the two bats are comparable. Additionally, both bats performed natural echolocation and flight behaviors as neural recordings were taken.

Video recording

The flight trajectory of the bat was reconstructed using a motion tracking system with 16 high-speed cameras (Vicon). The motion tracking system was calibrated with a moving wand-based calibration method (Theriault et al., 2014), yielding sub-millimeter accuracy and 3D spatial location information of the bat at a frame rate of 300 Hz. Once the motion tracking system was calibrated, it tracked the bat in a 3D coordinate frame of reference, which we refer to as ‘world coordinates.’ Affixed on the dorsal side of the transmitter board were three IR-reflective markers (3 mm round) that were tracked with the high-speed motion tracking system (Vicon). By tracking the 3D positions of these three markers, we were able to determine the 3D position and head aim of the bat during the experiment. Around the perimeter of the room, at a distance of about 0.5 meters from the walls, the motion capture cameras did not provide adequate coverage, and data from the bat at these locations were not used for analysis.

Audio recordings

In addition to recording the position of the bat, we recorded the sonar calls of the bat using an array of ultrasonic microphones (Pettersson Elektronik, Ultrasound Advice, see Figure 1A). The microphone recordings were hardware bandpass filtered between 10 kHz and 100 kHz (Alligator Technologies and Stanford Research Systems) and were digitized using data acquisition systems (National Instruments + custom-built hardware).

Synchronization of systems

All three hardware systems (i.e. neural recording, video-based 3D positioning, and microphone array) were synchronized using the rising edge of a square pulse generated by a custom circuit. The square pulse was manually triggered at the end of each trial (i.e. at the end of each individual flight) when the bat landed on the platform/wall. At the generation of the TTL pulse, each system (video and audio) saved 8 s of pre-buffered data to the hard disk of the local computer.

Analysis of flight behavior

To ensure that the bats were not using spatial memory to guide their flight, we randomly released the bats from different spatial locations in the flight room. The average number of flights per session was 22 for Bat 1 and 27 for Bat 2. Further, we used eight positions (a–h) for releasing Bat 1 and six positions (a–f) for releasing Bat 2. To evaluate stereotypy in the bats’ flight paths, we used methods previously developed by Barchi et al. (2013). Occupancy histograms were created by collapsing the 3D trajectory data to 2D plan projections (x,y and x,z). The number of points across a set of flight paths that fell inside 10 cm² bins were counted. These counts were converted to probabilities by dividing each bin count by the total number of points across each set of flights. After normalization, the occupancy histograms of trials could be compared within each session. The next step was to compute the autocorrelation of each trial and the cross-correlation of each trial with every other trial. The maximum value of each 2D cross-correlation was divided by the maximum value of the autocorrelation. This ratio is shown as a matrix for a representative session for both bats in Figure 1—figure supplement 1. The value of each square along the diagonal is one (yellow on the color bar), as it represents the autocorrelation of each flight trajectory. Cooler colors indicate minimal correlation between flight trajectories, and warmer colors indicate stereotypy between trajectories.
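The correlation-based stereotypy analysis above can be sketched as follows. This is a minimal illustration, assuming a 6 × 6 m floor plan and 10 cm bins; the function names and the FFT-based correlation are our own, not the authors' code:

```python
import numpy as np

def occupancy_histogram(xy, room=(6.0, 6.0), bin_size=0.1):
    """Collapse a 2D plan projection of a trajectory into an occupancy
    probability map (counts per bin, normalized to sum to 1)."""
    nx, ny = int(room[0] / bin_size), int(room[1] / bin_size)
    h, _, _ = np.histogram2d(xy[:, 0], xy[:, 1], bins=[nx, ny],
                             range=[[0.0, room[0]], [0.0, room[1]]])
    return h / h.sum()

def peak_xcorr(a, b):
    """Peak of the full linear 2D cross-correlation, via zero-padded FFTs."""
    s = (a.shape[0] + b.shape[0] - 1, a.shape[1] + b.shape[1] - 1)
    c = np.fft.irfft2(np.fft.rfft2(a, s) * np.conj(np.fft.rfft2(b, s)), s)
    return c.max()

def stereotypy_ratio(ref_map, test_map):
    """Peak cross-correlation normalized by the reference map's peak
    autocorrelation: 1.0 for identical (or purely translated) paths."""
    return peak_xcorr(ref_map, test_map) / peak_xcorr(ref_map, ref_map)
```

Identical (or purely translated) occupancy maps give a ratio of 1; dissimilar flight paths yield values closer to 0, matching the cool-to-warm color scale described above.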

Surgical procedures, neural recordings and spike sorting

Once the bats were trained on the task, a surgery was performed to implant a 16-channel silicon probe (Neuronexus). The probe consisted of four shanks spaced 100 μm apart, with four recording sites also spaced 100 μm apart on each shank, resulting in a 300 × 300 μm grid of recording sites. The silicon probe was connected by a ribbon cable to an electrical connector (Omnetics), and this assembly was mounted on a custom-made, manual microdrive so that it could be moved along the dorsal/ventral axis (i.e. across layers) of the superior colliculus during the experiment. The silicon probe and microdrive assembly was then mounted on the head of the bat over a craniotomy performed above the superior colliculus (SC). The SC sits on the dorsal surface of the brain of the big brown bat (Valentine and Moss, 1997; Valentine et al., 2002), allowing skull surface landmarks to be used in determining the implant location. Once the recording implant was positioned, a cap was made with cyanoacrylate (Loctite 4013) to protect and secure the implant to the skull surface. The bat was allowed several days to recover before we started the neural recording experiment.

In order to study neural activity in the superior colliculus during a real-world navigation task, a wireless neural-telemetry system (Triangle BioSystems International) was used in conjunction with a multi-channel neural acquisition platform (Plexon). This allowed for chronic neural recordings to be collected from the superior colliculus (SC) while the echolocating bat was navigating around obstacles in flight. During the experiment, a wireless RF telemetry board (Triangle BioSystems International) was connected to the plug of the silicon probe mounted on top of the bat’s head. Bat 1 flew for 12 sessions while recordings were made in the SC, and Bat 2 flew for 15 sessions. Each session typically lasted 30–45 min, and the microdrive was advanced at the end of each session to collect activity from a new set of neurons in the following recording session.

Neural data were sorted offline after filtering between 800 and 6000 Hz using a 2nd-order elliptic filter. Filtered neural traces were then sorted using a wavelet-based algorithm and clustering technique (Quiroga et al., 2004). Furthermore, we determined the Lratio and isolation distance for each wavelet-based cluster in order to provide a traditional measure of the efficacy of our clustering technique. In previous reports, an Lratio less than 0.07 and an isolation distance greater than 15 were used as thresholds for significantly separated spike-waveform clusters (Saleem et al., 2013; Schmitzer-Torbert et al., 2005). For our wavelet-based clustering technique, all Lratio values were less than 0.05, and isolation distances were greater than 15 (Figure 1—figure supplement 3), providing a secondary quantitative metric of the separation of our single-unit clusters.

This algorithm also separated movement artifact out of the raw neural traces. If any spike events occurred simultaneously with movement artifact, however, they were not recoverable. Movement artifact rarely occurred across all channels during flight and was mostly confined to times when the bat was landing. We only used data from the bats in flight for analysis. Of all sorted single units (n = 182), 67 units (sensory neurons) were selected for analysis, as described below. The isolated single units were stable throughout the session (see Figure 1—figure supplement 2).

Analysis of audio recordings

Audio recordings were analyzed using custom Matlab software to extract the relevant sound features, that is, pulse timing, duration, and interval. Combining the pulse timing (the time when a sound reached a stationary microphone) with the 3D flight trajectory data allowed us to compensate for sound-propagation delays and calculate the actual call production times at the source (i.e. the veridical time when the bat produced the sonar sound).
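Because the bat moves while the sound propagates, the veridical emission time can be recovered by a simple fixed-point iteration. The sketch below is illustrative only; the function name, iteration scheme, and the assumed 343 m/s speed of sound are ours, not the authors' Matlab implementation:

```python
import numpy as np

C_AIR = 343.0  # assumed speed of sound in air (m/s)

def emission_time(t_arrival, mic_pos, traj_t, traj_xyz, n_iter=5):
    """Recover the veridical call-production time from the arrival time at a
    stationary microphone: iterate t = t_arrival - distance/c, with the
    bat's position interpolated from the tracked trajectory at time t."""
    mic_pos = np.asarray(mic_pos, dtype=float)
    t = t_arrival
    for _ in range(n_iter):
        pos = np.array([np.interp(t, traj_t, traj_xyz[:, k]) for k in range(3)])
        t = t_arrival - np.linalg.norm(pos - mic_pos) / C_AIR
    return t
```

For a bat flying at a few m/s the correction converges in one or two iterations, since the bat moves only millimeters during the acoustic travel time.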

Identification of sonar sound groups

Sonar sound groups (SSGs) are defined as clusters of two or more vocalizations that occur at a near-constant PI (within 5% error with respect to the mean PI of the sound group) and are flanked by calls with a larger PI at both ends (at least 1.2 times larger) (Kothari et al., 2014; Moss and Surlykke, 2001; Moss et al., 2006). SSGs of two vocalizations are also produced by the bat, and our criterion for these SSGs is that the surrounding PIs must be at least 1.2 times larger than the PI between the two vocalizations contained within the SSG. Here, we use the same definitions and thresholds as in prior work (see Figure 6A for a visual explanation). As we use pulse rate in the main text, it is important to note that Pulse Interval = 1/Pulse Rate.
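The definition above translates into a simple scan over the sequence of pulse intervals. The sketch below uses the stated 5% and 1.2× thresholds; the greedy grouping strategy and function name are our own assumptions, not the authors' code:

```python
import numpy as np

def find_ssgs(call_times, tol=0.05, flank=1.2):
    """Return (first_call, last_call) index pairs of sonar sound groups:
    runs of pulse intervals (PIs) within `tol` of the run's mean PI, flanked
    on both sides by a PI at least `flank` times that mean."""
    pi = np.diff(np.asarray(call_times, dtype=float))
    groups = []
    i = 0
    while i < len(pi):
        j = i
        # greedily grow the candidate group while its PIs stay near-constant
        while j + 1 < len(pi):
            seg = pi[i:j + 2]
            if np.max(np.abs(seg - seg.mean())) <= tol * seg.mean():
                j += 1
            else:
                break
        seg = pi[i:j + 1]
        m = seg.mean()
        flanked = (i > 0 and pi[i - 1] >= flank * m
                   and j + 1 < len(pi) and pi[j + 1] >= flank * m)
        if flanked:
            groups.append((i, j + 1))  # calls i .. j+1 form one SSG
        i = j + 1
    return groups
```

A single short PI flanked by two long ones yields a two-call SSG; longer runs of near-constant PIs yield larger groups.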

Echo model

The ‘echo model’ is an acoustic model that takes into account the instantaneous 3D position of the bat, the 3D positions of the objects, the bat’s head direction vector, the time of production of the sonar sound, and the physical parameters of sound propagation in air, in order to compute the direction and time of arrival of echoes at the bat’s ears. With this model, each time the bat vocalized, we computed the arrival time and direction of the returning echoes.
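At its core, the timing computation reduces to a two-way travel time per object. The sketch below is a simplification that treats the bat as stationary over a single call-echo cycle and assumes a speed of sound of 343 m/s; the authors' full model also accounts for head direction and the separate paths to the two ears:

```python
import numpy as np

C_AIR = 343.0  # assumed speed of sound in air (m/s)

def echo_arrival(bat_pos, obj_positions):
    """Echo delay (two-way travel time) and range for each object, treating
    the bat as stationary over one call-echo cycle (a few ms of flight)."""
    d = np.linalg.norm(np.asarray(obj_positions, dtype=float)
                       - np.asarray(bat_pos, dtype=float), axis=1)
    return 2.0 * d / C_AIR, d
```

An object ~1.7 m away, for example, returns an echo roughly 10 ms after the call, which is the quantity the delay-tuned neurons described above are selective for.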

Figure 2—figure supplement 1A shows an outline of a bat with the neural telemetry headstage (TBSI). The headstage is shown as a grey box with a 16-channel Omnetics connector (male and female) at the bottom. Three reflective markers (4 mm diameter), P, Q and R (black), which are tracked by the infrared motion tracking cameras (Vicon) are also shown. A top view (cartoon) of the bat and telemetry headstage, with markers is shown in Figure 2—figure supplement 1B.

Reconstruction of 3D flight trajectory, head aim and egocentric axes

The bat’s flight trajectory was reconstructed by computing the centroid (geometric center) of the three markers on the headstage. In the case of missing points, only the points visible to the motion tracking system were used. The three points (P, Q, R) on the headstage were arranged as a triangle, with two of the points (Q and R) at the trailing edge of the headstage (Figure 2—figure supplement 1A and B) and marker P at the front of the headstage. The 3D head aim of the bat was computed by first calculating the midpoint (P′) of QR and then constructing PP′ along the midline of the head (Figure 2—figure supplement 1B, head aim vector is shown as a dashed red arrow).

(1) p̂x = PP′ / |PP′| (head aim unit vector)

The z-direction of the egocentric axes was computed as the cross product of PQ and PR.

(2) p̂z = (PQ × PR) / |PQ × PR|

Further, the y-direction of the egocentric axes was computed as the cross product of pz and px.

(3) $\hat{p}_y = \hat{p}_z \times \hat{p}_x$

where $\times$ denotes the vector cross product.
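Equations 1–3 can be sketched as a short numpy routine. The sign convention, that the head-aim axis points forward from the midpoint of QR toward the front marker P, is our assumption, and `egocentric_axes` is a hypothetical helper name.

```python
import numpy as np

def egocentric_axes(P, Q, R):
    """Egocentric unit axes from the three headstage markers (Eqs. 1-3).

    P is the front marker; Q and R sit at the trailing edge. The x-axis
    (head aim) is assumed to run from the midpoint of QR toward P.
    """
    P, Q, R = (np.asarray(v, float) for v in (P, Q, R))
    x = P - (Q + R) / 2.0            # head-aim direction (Eq. 1)
    x /= np.linalg.norm(x)
    z = np.cross(Q - P, R - P)       # normal to the marker plane (Eq. 2)
    z /= np.linalg.norm(z)
    y = np.cross(z, x)               # completes a right-handed triad (Eq. 3)
    return x, y, z
```

Because x lies in the marker plane and z is its normal, y = z × x is automatically a unit vector, so the three axes form an orthonormal egocentric frame.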

We refer to the above instantaneous egocentric coordinate system (px, py, pz) as the ‘local’ coordinate system and the coordinate system from the frame of reference of the motion capture cameras as the ‘world’ coordinate system (PX, PY, PZ). An example of a reconstructed flight trajectory is shown in Figure 2C. This trajectory is in the ‘world’ coordinates shown as the X, Y, Z axes (red, green and blue colors respectively) at the left corner of Figure 2C. The bat’s head aim during vocalizations (solid yellow circles on the flight trajectory) is indicated by black lines. Figure 2C also shows two example points, P(x1, y1, z1) and Q(x2, y2, z2), in the bat’s flight trajectory when the bat produces sonar calls. [px, py, pz] and [qx, qy, qz] (red, green, blue respectively) are the axes which form the ‘local’ instantaneous egocentric coordinate system (computed as per Equations 1, 2 and 3) with respect to the bat’s current position in space and head aim.

To transform the microphone, object and room-boundary coordinates from the ‘world’ coordinate system into the ‘local’ instantaneous egocentric coordinate system, points are translated and rotated using quaternion rotations (Altmann, 2005).

For example, let $A(X_a, Y_a, Z_a)$ be the coordinates of an object in the global coordinate system $(P_X, P_Y, P_Z)$. The new coordinates $A(x_a, y_a, z_a)$ of the same object with respect to the instantaneous egocentric coordinate system $(p_x, p_y, p_z)$ are then computed as in (4).

(4) $A(x_a, y_a, z_a) = \mathrm{ROT}_{(P_X, P_Y, P_Z) \to (p_x, p_y, p_z)}\, A(X_a, Y_a, Z_a)$
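Equation 4 amounts to a rigid translation followed by a rotation. The sketch below implements it with a rotation matrix whose rows are the egocentric unit axes, which is mathematically equivalent to the quaternion rotation used in the text; `world_to_egocentric` is a hypothetical helper name.

```python
import numpy as np

def world_to_egocentric(A_world, bat_pos, x_hat, y_hat, z_hat):
    """Express a world point in the bat's instantaneous egocentric frame (Eq. 4).

    Translates by the bat's position and projects onto the egocentric
    unit axes; the row-matrix form is equivalent to the quaternion
    rotation described in the text.
    """
    Rmat = np.vstack([x_hat, y_hat, z_hat])   # rows = egocentric axes
    return Rmat @ (np.asarray(A_world, float) - np.asarray(bat_pos, float))
```

For example, with the bat at (1, 1, 0) and its head aim rotated 90° about the vertical axis, an object at world position (1, 3, 0) lands 2 m straight ahead in the egocentric frame.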

Steps to compute the direction and time of arrival of echoes at the bat’s ears

Once the Euclidean object coordinates are transformed into the instantaneous egocentric coordinate system, $A(x_a, y_a, z_a)$, unit vectors of object directions are computed (5), and the direction angles of echo source locations are obtained by converting from Euclidean to spherical coordinates $A(\theta, \varphi, R)$ (azimuth, elevation, range), as given in (6).

(5) $\hat{a} = \dfrac{A(x_a, y_a, z_a)}{|A(x_a, y_a, z_a)|}$ (unit vector)

The range of the object is simply the distance between the bat’s instantaneous location and the object.

(6) $\theta = \sin^{-1}(\hat{a} \cdot \hat{p}_x)$ and $\varphi = \sin^{-1}(\hat{a} \cdot \hat{p}_y)$, Range $(R) = |A(x_a, y_a, z_a)|$

Time of arrival of echoes at the bat’s ear is computed as given in (7).

(7) $T_{arr} = \dfrac{2R}{c_{air}}$, where $c_{air}$ is the speed of sound in air
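Equations 5–7 can be sketched together as follows. The 344 m/s speed of sound is an illustrative room-temperature value, not taken from the paper, and `echo_params` is a hypothetical helper name.

```python
import numpy as np

SPEED_OF_SOUND = 344.0  # m/s in air; illustrative room-temperature value

def echo_params(A_local):
    """Azimuth, elevation, range and two-way travel time (Eqs. 5-7).

    A_local is the object position in the egocentric frame, where
    x = head aim, y = lateral and z = up (per Eqs. 1-3), so the unit
    axes reduce to the standard basis vectors.
    """
    A_local = np.asarray(A_local, float)
    rng = np.linalg.norm(A_local)        # Range R (Eq. 6)
    a_hat = A_local / rng                # unit vector (Eq. 5)
    az = np.arcsin(a_hat[0])             # theta = asin(a . p_x), per Eq. 6
    el = np.arcsin(a_hat[1])             # phi   = asin(a . p_y), per Eq. 6
    t_arr = 2.0 * rng / SPEED_OF_SOUND   # two-way travel time (Eq. 7)
    return az, el, rng, t_arr
```

In the local frame the dot products of Equation 6 reduce to the first two components of the unit vector, which is why the code indexes `a_hat` directly.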

Figure 2D shows how the instantaneous solid angle between the bat’s head-aim vector and each object changes as the bat flies through the room; the data correspond to the flight trajectory shown in Figure 2C. Figure 2E shows the echo arrival times at the bat’s ears as computed by the echo model. Figure 2F and G show the room, objects and microphones from the bat’s egocentric point of ‘view’, as computed using the echo model; these panels correspond to the highlighted points, P and Q, in Figure 2C. The egocentric y and z axes are marked in green and blue, respectively. The head-aim vector (x-axis) points into the plane of the page and is denoted by a red circle.

Error analysis of the 3D head-aim reconstruction

Because the dimensions of the headstage were known and remained fixed over the course of the experiment, analysis of tracking errors from the motion capture system is simplified. For example, the distance between the P and Q head markers was 21 mm (see Figure 2—figure supplement 1B). We allowed a maximum error of 1 mm; tracked points exceeding this threshold were excluded from the analysis. In reality, the error in the distance between markers is a distributed error in the positions of the two markers (P and Q in this case), shown as grey spheres/discs around each marker in Figure 2—figure supplement 1B. The head aim is reconstructed as the vector $\vec{PM}$. To compute the maximum and average error in the estimation of the head-aim vector, we must estimate the error in locating the midpoint of $\overline{QR}$; we do so by first estimating the errors in the coordinates of M.

For simplicity, consider the 2D case and let M be the origin, as shown in Figure 2—figure supplement 1C. The coordinates of Q and R can then be written as (−L, 0) and (L, 0), respectively, where 2L is the length of $\overline{QR}$. Consider points $Q'(x_{Q'}, y_{Q'})$ and $R'(x_{R'}, y_{R'})$, which lie within the circles of radius r centered at Q and R, respectively, and the point $M'(x_{M'}, y_{M'})$, the midpoint of $\overline{Q'R'}$. Here r is the maximum allowed error in the distance estimate of $\overline{QR}$ (see Figure 2—figure supplement 1C). The equations of the circles can be written as below (8):

(8) $(x_{Q'} + L)^2 + y_{Q'}^2 \le r^2$ and $(x_{R'} - L)^2 + y_{R'}^2 \le r^2$

Adding these equations and rearranging the terms we can rewrite the final equation as

(9) $\left(\dfrac{x_{Q'} + x_{R'}}{2}\right)^2 + \left(\dfrac{y_{Q'} + y_{R'}}{2}\right)^2 = x_{M'}^2 + y_{M'}^2 \le \dfrac{r^2}{2} - \dfrac{L^2}{2} + \dfrac{|\vec{MQ'}||\vec{MR'}|\cos\alpha}{2}$

where α is the angle between the vectors $\vec{MQ'}$ and $\vec{MR'}$, as shown in Figure 2—figure supplement 1C. Solving the equation for the extreme cases, when α is 0 or 180 degrees, shows that equation (9) reduces to (10), proving that the error in the estimation of the midpoint M′ is also bounded by a sphere/circle of radius r.

(10) $x_{M'}^2 + y_{M'}^2 \le r^2$

Figure 2—figure supplement 1D shows the head-aim vector as $\vec{PM}$, with the grey circles around each point indicating the error in the position of each marker. In the 2D case shown in Figure 2—figure supplement 1D, it is straightforward to show that the maximum angular error in the estimation of the head-aim vector is the angle between $\overline{PM}$ and $\overline{T_1T_2}$, where $\overline{T_1T_2}$ is the line tangent to both maximum-error circles (indicated in grey); it can be computed as given in (11).

(11) $\beta_{err}(max) = \sin^{-1}\left(\dfrac{r}{L}\right) = 5.45°$
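Equation 11 can be checked numerically. The values of r and L below are our assumptions (r = 1 mm from the stated error threshold, and L = 10.5 mm as half of the 21 mm marker separation), chosen because they reproduce the quoted 5.45°.

```python
import numpy as np

# Evaluating Eq. (11): the worst-case tilt of the reconstructed head-aim
# vector, given marker-position errors bounded by circles of radius r and
# half-baseline L. Values are illustrative assumptions (r = 1 mm,
# L = 10.5 mm), not quantities stated explicitly together in the text.
r_mm, L_mm = 1.0, 10.5
beta_err_max_deg = np.degrees(np.arcsin(r_mm / L_mm))
print(round(beta_err_max_deg, 2))
```

With these values the bound comes out just under 5.5°, consistent with the 5.45° figure in Equation 11.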

Error analysis of the point object approximation

When estimating echo arrival times and echo source locations, all objects are approximated as point objects and point sources. Figure 2—figure supplement 2A shows the cross-section of a cylindrical object that was used as an obstacle in the bat’s flight path. The error in the estimated echo arrival time depends on the position of the bat with respect to the object. Figure 2—figure supplement 2B shows how this error changes as a function of the angle (θ) between the bat’s position and the object’s horizontal axis, as defined in Figure 2—figure supplement 2A. Figure 2—figure supplement 2C shows the accuracy of the echo model as a function of the bat’s position as it moves around the object on a sphere of 2 m radius. To summarize, the minimum and maximum errors in the time of arrival of the echo at the bat’s ears due to the point-object approximation are 0.35 ms and 0.68 ms, respectively.

Echo model validation

The echo model was verified using two different approaches, as detailed below.

  1. We broadcast sounds from a speaker and recorded echoes reflected back from objects using a microphone (Figure 2—figure supplement 2D). Here, ‘d’ is the distance from the microphone/speaker to the object, while ‘L’ is the distance used by the echo model under the point-object approximation; this introduces a systematic error of ‘L − d’ in the time of arrival of the echo. In this setup, the reflecting object was placed at different distances from the speaker and microphone, and the recorded echo arrival times were compared with the arrival times computed by the echo model. Figure 2—figure supplement 2E shows spectrograms of microphone recordings with the object placed 0.7, 1.2 and 1.8 meters from the recording microphone. The results matched the theoretical error bounds (discussed above and shown in Figure 2—figure supplement 2A, B and C) within an error of less than 0.1 ms (Figure 2—figure supplement 2F).

  2. A 14-channel microphone array was placed on the wall opposite the bat’s flight direction. As the bat navigated around objects in its flight path, the microphone array recorded echoes reflected off the objects. Using time-of-arrival-difference (TOAD) algorithms (Madsen and Wahlberg, 2007), the 3D locations of the echo sources were computed and matched against the locations computed by the echo model (see supplementary videos SV1 and SV2).

Classification of neurons into sensory, sensorimotor and vocal-premotor cells

In order to classify neurons, we developed an algorithm based on the variability of spike-latency distributions with respect to echo arrival time, previous call production time, and next call production time. In simple terms, the algorithm measures the variability of spike latencies relative to echo time and call time (previous and next) to classify neurons as sensory, vocal premotor or sensorimotor, on the assumption that a neuron’s activity is most tightly coupled in time to its functionally relevant event. If a neuron’s spike-latency distribution was sharpest with respect to echo arrival time, it was classified as sensory; if spike latencies were sharpest with respect to pulse time, the neuron was classified as vocal premotor; and if spike latencies clustered around both pulse time and echo arrival time, it was classified as sensorimotor. Importantly, for sensory neurons we further resolved the problem of echo assignment by considering only neurons that fire to the first arriving echo and show no activity to subsequent echo events (see Figure 3). This also addresses the problem of wall/camera/microphone echoes, as these were the last to arrive. More than 90% of the sensory neurons analyzed in this study responded only to the first echo. For the remaining neurons, which responded to a cascade of echoes (about 10% of those sampled), activity could not be reliably assigned to specific echo arrival times, and they were therefore excluded from the data reported in this paper. Using this algorithm, the 182 recorded neurons were classified as sensory (n = 67), vocal premotor (n = 26) and sensorimotor (n = 45); the remaining 44 neurons were unclassified. Classification into sensory, sensorimotor and premotor categories is common for SC neurons (Mays and Sparks, 1980; Schiller and Koerner, 1971). Spatial tuning profiles were constructed only for the sensory neurons (n = 67).
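The classification logic can be sketched as follows. This is a toy version of the algorithm: the 5 ms sharpness threshold, the use of the nearest preceding event, and the function name `classify_unit` are illustrative assumptions rather than the paper's actual criteria.

```python
import numpy as np

def classify_unit(spike_times, call_times, echo_times, sharp_ms=5.0):
    """Toy latency-variability classifier (all times in seconds).

    For each spike, take the latency to the nearest preceding echo and to
    the nearest preceding call; a unit is 'sensory' if only the
    echo-locked latency distribution is sharp (s.d. below sharp_ms),
    'vocal premotor' if only the call-locked one is, 'sensorimotor' if
    both are, and 'unclassified' otherwise.
    """
    def latency_sd(events):
        events = np.asarray(events, float)
        lats = [s - events[events <= s].max()
                for s in spike_times if np.any(events <= s)]
        return np.std(lats) * 1000.0  # seconds -> ms

    echo_sharp = latency_sd(echo_times) < sharp_ms
    call_sharp = latency_sd(call_times) < sharp_ms
    if echo_sharp and call_sharp:
        return 'sensorimotor'
    if echo_sharp:
        return 'sensory'
    if call_sharp:
        return 'vocal premotor'
    return 'unclassified'
```

A unit whose spikes follow each echo at a fixed delay, while drifting relative to call times, is labeled sensory under this scheme.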

Construction of spatial response profiles

Once a neuron was identified as sensory (see criterion above), direction information from the echo model was expressed in the egocentric coordinates of the bat’s instantaneous position, and the X, Y and Z information was converted into azimuth, elevation and range coordinates. We then tested spatial selectivity with an ANOVA (p<0.05) performed along each dimension (azimuth, elevation and range); only cells that passed the ANOVA for a given dimension were used for further analysis. Neural responses of cells that passed the spatial selectivity test were normalized by the coverage of each of these dimensions, as explained below.

The spatial response profiles (for neurons that passed the spatial selectivity test; see above) were then normalized by the stimulus space, that is, the time spent by the animal in each dimension (see Figure 3—figure supplement 1D – range, E – azimuth and F – elevation): the spike-count spatial response profile was divided by the time-spent spatial profile to yield a spiking probability per bin in each dimension (distance, azimuth and elevation). Regions of the stimulus space with fewer echo events per bin than one standard deviation below the mean were excluded from the computations (open bins in Figure 3—figure supplement 1D, E and F). Finally, the normalized spatial response profiles in each dimension were fit with a Gaussian function using the fit function in Matlab; spatial response profile means, half-widths and standard deviations were taken from the Gaussian fit.
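The occupancy normalization can be sketched as below. The paper fits a Gaussian with Matlab's fit function; the probability-weighted moments used here are a dependency-free stand-in for that fit, and `range_tuning` is a hypothetical helper name.

```python
import numpy as np

def range_tuning(spike_ranges, echo_ranges, edges):
    """Occupancy-normalized range tuning curve (sketch).

    Divides the spike-count histogram by the echo-event (stimulus)
    histogram, so bins the bat rarely sampled do not masquerade as
    tuning, then summarizes the curve by its probability-weighted mean
    and s.d. as a stand-in for the paper's Gaussian fit.
    """
    spikes, _ = np.histogram(spike_ranges, bins=edges)
    stim, _ = np.histogram(echo_ranges, bins=edges)
    prob = np.zeros(len(spikes))
    valid = stim > 0
    prob[valid] = spikes[valid] / stim[valid]   # spiking probability per bin
    centers = (edges[:-1] + edges[1:]) / 2.0
    w = prob / prob.sum()
    mu = float((w * centers).sum())                       # tuning peak (mean)
    sd = float(np.sqrt((w * (centers - mu) ** 2).sum()))  # tuning width
    return centers, prob, mu, sd
```

A simple masking of empty stimulus bins replaces the paper's one-standard-deviation exclusion criterion, which requires the full occupancy distribution.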

Out of the 67 sensory neurons (see criterion above), overlapping populations of neurons showed 3D, 2D or 1D spatial selectivity. 46 neurons (19 from Bat 1 and 27 from Bat 2) showed spatial selectivity in 3D (azimuth, elevation and depth). Further, 56, 52 and 51 neurons showed 1D spatial selectivity for depth, azimuth and elevation, respectively. Figure 4—figure supplement 3 describes the complete distribution of 3D, 2D and 1D neurons. The mean response latency of the single sensory neurons we recorded was 5.9 ± 3.4 ms; the minimum spike latency was 3 ms and the minimum s.d. of latency was 1 ms. The median s.d. of response latencies for the 67 sensory neurons was 3.8 ms. Previous publications have reported a wide range of response latencies in the SC of the passively listening bat, from as long as 40 ms to as short as 4 ms (Valentine and Moss, 1997), 3.6 ms (Jen et al., 1984) and 4 ms (Wong, 1984); short-latency responses are likely mediated through a direct projection from the nucleus of the central acoustic tract to the SC (Casseday et al., 1989).

Stability of 3D spatial receptive fields

We also determined the stability of receptive fields for the individual 3D-tuned neurons (n = 46) by comparing spatial tuning between the first and second halves of recording sessions. 37 neurons showed significant 3D spatial selectivity in both halves (see methods above for details). Firing is sparse in the auditory system of echolocating bats, and we believe that, because of this sparse firing, nine of the 46 neurons did not show significant spatial tuning in either the first or the second half of the recording session, owing to the limited amount of data in each half. Comparing selectivity between the two halves, 33 neurons showed no change in peak tuning along any dimension. Only four neurons showed a significant change in tuning across the session (two in the distance dimension and one each in azimuth and elevation), demonstrating that the majority of neurons have stable receptive fields across the recording session. Figure 4—figure supplement 1 shows the stability of spatial tuning for the depth dimension; red dots indicate neurons with a significant change in depth tuning between the first and second halves of the recording session.

Neural selectivity was analyzed only with respect to spatial selectivity along the X, Y, and Z dimensions. The bat’s echolocation calls are wide-band frequency modulated sounds, which are well suited to evoke activity from SC neurons that respond well to broadband acoustic stimuli. Since variations in the bat’s own calls evoked echoes that stimulated SC neurons, we could not systematically analyze responses to other stimulus dimensions, such as sound frequency or intensity. Stimulus selectivity of SC neurons in the bat to non-spatial acoustic parameters will be the topic of a future study.

SSG and non-SSG analysis

Separate range tuning profiles were computed for each cell for SSG and non-SSG vocalizations. Variance (sharpening) of the SSG and non-SSG tuning profiles was tested using the non-parametric Brown–Forsythe test of variance at the α level of 0.05; the test results for each cell are described in detail in Supplementary Table 1 (also see Figure 5E). SSG and non-SSG distance tuning curves were also compared using the Wilcoxon rank-sum test; test statistic details for each cell are given in Supplementary Table 2 (also see Figure 5F).

Power analysis of sample sizes for the SSG and non-SSG spatial tuning comparisons

The firing of auditory neurons in the echolocating big brown bat is very sparse (see, for example, Dear et al., 1993; Valentine and Moss, 1997). For the SSG and non-SSG analysis (above), we separated spiking activity according to whether the bat produced SSGs or non-SSGs, which left some of the data sets with low spike counts. To ensure that each comparison, for each neuron, had sufficient statistical power, we performed a permutation test: we combined the SSG and non-SSG data sets, randomly shuffled them, and drew spikes without replacement into two surrogate groups. We then performed the Brown–Forsythe test or the Wilcoxon rank-sum test, for the sharpening and shifting comparisons, respectively. We repeated this procedure 1000 times, collecting the value of the test statistic each time. Finally, we compared the test statistic of the original sample to the distribution obtained from the shuffled groups to obtain a p-value. Only cells that passed this test at the p<0.05 criterion level were included in the analysis, which excluded 3/56 cells from Figure 5E and 5/56 cells from Figure 5F.
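The permutation procedure can be sketched generically. Here `statistic` stands in for the Brown–Forsythe or rank-sum statistic, and the one-sided comparison and the +1 correction are standard permutation-test conventions, not details from the paper.

```python
import numpy as np

def permutation_pvalue(group_a, group_b, statistic, n_perm=1000, seed=0):
    """Label-shuffling permutation test (sketch).

    Pools the two samples, redraws groups of the original sizes without
    replacement n_perm times, and returns the fraction of shuffled
    statistics at least as extreme as the observed one (with the usual
    +1 correction so the p-value is never exactly zero).
    """
    rng = np.random.default_rng(seed)
    a = np.asarray(group_a, float)
    b = np.asarray(group_b, float)
    observed = statistic(a, b)
    pooled = np.concatenate([a, b])
    count = 0
    for _ in range(n_perm):
        perm = rng.permutation(pooled)          # shuffle group labels
        if statistic(perm[:len(a)], perm[len(a):]) >= observed:
            count += 1
    return (count + 1) / (n_perm + 1)
```

With an absolute mean-difference statistic, two clearly separated groups yield a small p-value while identical groups yield a large one, which is the behavior the power check relies on.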

Local field potential

The local field potential (<300 Hz) was extracted from each channel using second-order elliptical filters. We then analyzed the gamma band (50–140 Hz) (Goddard et al., 2012a; Sridharan and Knudsen, 2015) to investigate whether the epochs in which the bat produced sonar sound groups (SSGs) were correlated with gamma-band activity. We first identified channels whose LFP was not distorted by movement artifact (Figure 6—figure supplement 2), and then extracted 100 ms spike-triggered LFP windows from the corresponding recording sites. These were separated into SSG and non-SSG events and averaged separately to estimate the root-mean-square (RMS) gamma-band power (Jaramillo and Zador, 2011) when the bat produced SSGs and non-SSGs (Figure 6A and B). To investigate the timing of the gamma signal, the averaged gamma-band amplitude envelope was normalized across SSG and non-SSG trials for each neuron, and a Gaussian was fit to each time waveform to estimate its peak (Figure 6C and D). The average of the peaks across all units was taken as the average latency of the LFP following the spike event.
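Gamma-band extraction can be sketched as below. The paper uses second-order elliptical filters; the FFT band mask here is a dependency-free stand-in that isolates the same 50–140 Hz band, and `gamma_rms` is a hypothetical helper name.

```python
import numpy as np

def gamma_rms(lfp, fs, band=(50.0, 140.0)):
    """RMS power of the gamma band of an LFP snippet (sketch).

    Zeroes all FFT bins outside the band, inverts the transform, and
    returns the RMS of the band-limited signal; a simple substitute for
    the elliptical band-pass filtering described in the text.
    """
    spec = np.fft.rfft(lfp)
    freqs = np.fft.rfftfreq(len(lfp), d=1.0 / fs)
    spec[(freqs < band[0]) | (freqs > band[1])] = 0.0  # keep only the band
    gamma = np.fft.irfft(spec, n=len(lfp))
    return float(np.sqrt(np.mean(gamma ** 2)))
```

A 100 Hz component passes through unchanged, while a 12 Hz wingbeat-rate component is removed entirely, illustrating why the band choice insulates the analysis from movement artifact.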

We also examined whether movement artifact from the bat’s wing beats could have corrupted the LFP analysis. The bat’s wingbeat rate is approximately 12 Hz, whereas the gamma band we analyzed spans 50–140 Hz, and harmonics of the wingbeat approaching this range were significantly attenuated. To further ensure that movement artifact did not corrupt the LFP analysis, we chose channels where the power ratio between the low-frequency band (10–20 Hz) and the gamma band was less than 6 dB. We identified 21 low-noise channels containing 26 single-neuron recordings (see Figure 6—figure supplement 2), which were used for further analysis.

Data and code availability

The original raw data can be obtained upon request from NBK, MJW or CFM (cynthia.moss@jhu.edu). Given the size of the raw data (approx. 2 terabytes), the full dataset has not been deposited in a public repository, but the partial and processed data sets used to generate Figures 5E, 5F, 6C, 6D and E have been made available under an open-source license on GitHub (Kothari et al., 2018; copy archived at https://github.com/elifesciences-publications/Dynamic-3D-auditory-space-in-bats).

References

  1. Altmann S (2005) Rotations, Quaternions, and Double Groups. Mineola: Courier Corporation.
  2. Fanselow EE, Nicolelis MA (1999) Behavioral modulation of tactile responses in the rat somatosensory system. The Journal of Neuroscience 19:7603–7616.
  3. Griffin DR (1958) Listening in the Dark: The Acoustic Orientation of Bats and Men. New Haven: Yale University Press.
  4. Knudsen EI (1982) Auditory and visual maps of space in the optic tectum of the owl. The Journal of Neuroscience 2:1177–1194.
  5. Leigh J, Zee DS (1983) The Neurology of Eye Movements. Oxford University Press.
  6. McAdams CJ, Maunsell JH (1999) Effects of attention on orientation-tuning functions of single neurons in macaque cortical area V4. The Journal of Neuroscience 19:431–441.
  7. Middlebrooks JC, Knudsen EI (1984) A neural code for auditory space in the cat’s superior colliculus. The Journal of Neuroscience 4:2621–2634.
  8. Moss CF, Schnitzler H-U (1995) Behavioral studies of auditory information processing. In: Popper AN, Fay RR, editors. Hearing by Bats. New York: Springer. pp. 87–145. https://doi.org/10.1007/978-1-4612-2556-0_3
  9. Moss CF, Surlykke A (2001) Auditory scene analysis by echolocation in bats. The Journal of the Acoustical Society of America 110:2207–2226. https://doi.org/10.1121/1.1398051
  10. O'Neill WE, Suga N (1982) Encoding of target range and its representation in the auditory cortex of the mustached bat. The Journal of Neuroscience 2:17–31.
  11. Posner MI (1980) Orienting of attention. Quarterly Journal of Experimental Psychology 32:3–25. https://doi.org/10.1080/00335558008248231
  12. Reynolds JH, Chelazzi L, Desimone R (1999) Competitive mechanisms subserve attention in macaque areas V2 and V4. The Journal of Neuroscience 19:1736–1753.
  13. Simmons JA (1973) The resolution of target range by echolocating bats. The Journal of the Acoustical Society of America 54:157–173. https://doi.org/10.1121/1.1913559
  14. Surlykke A, Simmons JA, Moss CF (2016) Perceiving the world through echolocation and vision. In: Bat Bioacoustics. New York: Springer. pp. 265–288. https://doi.org/10.1007/978-1-4939-3527-7_10
  15. Valentine DE, Moss CF (1997) Spatially selective auditory responses in the superior colliculus of the echolocating bat. The Journal of Neuroscience 17:1720–1733.

Decision letter

  1. Catherine Emily Carr
    Reviewing Editor; University of Maryland, United States

In the interests of transparency, eLife includes the editorial decision letter and accompanying author responses. A lightly edited version of the letter sent to the authors after peer review is shown, indicating the most substantive concerns; minor comments are not usually included.

Thank you for submitting your article "Dynamic representation of 3D auditory space in the midbrain of the free-flying echolocating bat" for consideration by eLife. Your article has been favorably evaluated by Andrew King (Senior Editor) and three reviewers, one of whom is a member of our Board of Reviewing Editors. The reviewers have opted to remain anonymous.

The reviewers have discussed the reviews with one another and the Reviewing Editor has drafted this letter to assess your responses to the concerns raised by the reviewers.

Summary:

The authors have developed a telemetry recording device mounted on a big brown bat and recorded responses from neurons in the superior colliculus to the bat's own echolocation calls in free flight. This is a major finding, requiring the use of a lightweight telemetry device, and demonstrating delay-tuned neural responses to echoes from physical objects. Previous work had used simulated pulses and echoes. Another significant result is the demonstration that sonar sound groups are correlated with sharpened responses.

Essential changes:

Despite these achievements, there are some major concerns. Data are reported from only 2 bats, and only 41 single units are selective to pulse-echo pairs. These are not described in sufficient detail to evaluate the results. Detailed questions are provided with the individual reviews, which are provided below. How can the authors show that the bats were actually using their sonar system for performing the task at hand and hence attending and actively interpreting those signals? It is not clear whether the responses were recorded from bats prior to training to fly around the room, or during, or after training. How many days did each bat wear the telemetry device? Were neurons recorded when the bat was passively listening to sounds? Could echoes ever be recorded in order to precisely determine echo-response latencies?

The second major issue concerns the model, which has been used to replace recording of echoes. This is described as a technical advance, and appears to be fundamental to the analysis of neural responses. Nevertheless, insufficient details are provided to validate the model. Your flight room was lined with acoustic foam to reduce multiple echoes from the surroundings; however, surely the bat must have known the position of the walls etc. from its own echolocation? Were there (maybe faint) echoes from the floor or the cameras/microphones?

When referring to the sharpening of the neuronal response and the shift of the response towards closer targets during grouped pulses, you emphasize the potential role of attention and attention-guided sonar pulse emission. You also argue that a neuron's best delay (target distance) is shifted to shorter delays when the bat is engaged in the production of SSGs. A detailed analysis of the responses of single neurons during SSGs across different locations in the room would be important for accurately assessing the observed correlated changes in neural activity.

The last major concern is that the authors should clearly delineate the novelty of this study, since this was questioned by the reviewers.

Please also respond to the remaining reviewer concerns below.

Reviewer #1:

It is not clear whether the authors ever recorded the echoes that lead to their echo-evoked spikes, or only modelled them. The neural responses are described as responses to sensory locations in 3D egocentric space, but I cannot see the echoes in the figures.

Some of the claims of novelty seem overblown. The transformation from 2D to 3D spatial receptive fields is inherent in bat echolocation, and neurons tuned to different spatial locations and ranges have been described multiple times.

Reviewer #2:

In this study neuronal activity in the superior colliculus was recorded in free flying bats that have a wireless-multielectrode system implanted. Methodologically this study provides a breakthrough: For the first time it was possible to study spatial neuronal coding in an echolocating bat while the animal was freely moving and was confronted with several objects in its flight room. To be able to associate neuronal space coding with instantaneous occurrence of sonar echoes at the bat's ear, the authors employ a sophisticated model for predicting echo occurrence at the bat's ears that includes bat flight trajectory and head position filmed by several cameras, bat echolocation calls measured with a microphone array, and estimates of sonar beam width. This elegant technique is impressive and avoids having to put microphones on the bats' heads and it produces novel results: The authors convincingly show that the spatial focus of SC neurons sharpens and shifts to closer objects when the bat employs sonar sound groups, e.g. call doublets, to inspect its environment. The latter are indicative of increased attention. The Discussion could benefit from coverage of literature on sharpening and shift of spatial neuronal tuning in other auditory areas like the auditory cortex.

The study is very well done, it employs sophisticated new methods and its novel results clearly merit publication in eLife.

Reviewer #3:

The manuscript by Kothari et al. describes their investigation of the encoding of egocentric space by midbrain neurons in a freely moving animal. By recording from free-flying bats navigating via echolocation, they could correlate neural activity with the bats' ongoing echo-sampling of the environment. With the aid of a physics-based echo model, the authors reveal that sensory neurons in the superior colliculus not only encode 3D egocentric space but also appear to sharpen their tuning curves with the bat's ongoing behavior. The authors' findings support the hypothesis that a bat's echolocation system could enhance auditory spatial resolution, comparable to how eye saccades and foveal fixation enhance it for the visual system. Their results show for the first time how sensory neurons sharpen their 3D spatial resolution during a natural spatial navigation task. While the results are exciting from that perspective, many details need to be addressed, at the level of both the behavior and the neural data, to determine whether the presented data support the authors' conclusions. Furthermore, a more refined description of the novelty of this study would be helpful.

The description of the behavior, task and performance is severely lacking, which hinders accurate assessment of the authors' interpretation of the neural data. Considering the well-established modulation of neural activity in the superior colliculus (SC) by fine behavioral parameters, training procedures and sensory input, these details carry a lot of weight in validating the conclusions proposed by the authors. A few notable examples follow:

1) How can the authors show that the bats were actually using their sonar system for performing the task at hand and hence attending and actively interpreting those signals?

2) How reproducible was the flight behavior of the bats? If it was highly stereotyped, could it be that the bat was relying more on spatial memory and hence attending only partially, to a small fraction of the echo pulses, as "verification" sampling? If this is indeed the case, could the observed differences reflect not a dynamic modulation of the spatial signal but rather a transition between different behavioral states? It would also be informative if the authors reported the number of trials that go into each analysis and the spatial density of the flight behavior, as well as showed all the three-dimensional flight paths from representative sessions, so that a better assessment of the animal's spatial behavior can be obtained.

3) How were the animals trained on this task, and how well trained were the bats? Is it possible that the sharpening of the neural spatial resolution depends on how well the task is known, meaning a high expectancy on the part of the animal? Did the authors observe any changes in the sharpening or dynamic tuning of the neural activity on days of poorer vs. better performance?

4) What was the reward contingency? Neural activity in the SC is believed to be modulated by upcoming reward (see for example Ikeda and Hikosaka, Neuron, 2003). Did the authors observe different patterns of neural activity near rewarded (for example, landing platform) vs. non-rewarded (for instance, hanging obstacle) locations? Was the location of the obstacles changed between recording sessions, and did this manipulation have an effect on the animals' behavior and neural activity?

5) Can the authors show that the bats used echolocation rather than vision to perform the task? I assume the room was dark during the experiment, but how dark was it (lux levels)? Can the authors show that the bat behavior was modulated by the echo-pulses or, alternatively, that they did not use vision for attending to distal cues and echolocation for attending to more proximal cues (which might provide an alternative explanation for the neural responses)? This information is important in order to assess the extent to which these bats used different sensory modalities (audition vs. vision) in performing different aspects of this task. As a side note, most high-speed video tracking systems employ infrared lighting. Are there data on Eptesicus fuscus's wavelength sensitivity?

6) Can the authors provide a more detailed description of the type/size/shape of the objects used in their task? The results seem to show that the bats only needed a glimpse of an object and used this mainly to detect the object's spatial position, not needing more information about its spatial extent. Was this indeed the case for most objects presented, or were some inspected for longer durations of time? If so, did this influence the echolocation signals and corresponding neural activity?

7) What echolocation signals are actually being analyzed here? Are responses analyzed only to returning echoes, or also to "missed" echoes? That is, did the authors also look at neural responses after every echolocation call, independent of whether the call resulted in a reflection or not?

In addition to the analysis of neural data with respect to behavior noted above, further basic information should be analyzed and provided by the authors to better assess the quality of the presented data and its validity for the authors' interpretation. For instance:

1) What was the quality of the sorting? How well were the clusters separated into single units? The authors should provide quantitative measures for the quality of their sorting (such as isolation index and Lratio).

2) Furthermore, the authors describe a threshold of 200 spikes for a cell to be included in the analysis. How was this threshold chosen, and does this number only include spikes recorded during flight? The authors further set a seemingly arbitrary threshold of at least 20 spikes during SSG and non-SSG epochs, which results in a total sample of 20 neurons for this entire analysis. If this is indeed the case, it seems rather concerning to base conclusions on 20 action potentials and such a low number of neurons. The authors should either increase their neuron count or show statistically that their threshold is valid for avoiding low-sample biases in this analysis.

3) How were the motion artifacts shown by the authors in Figure 1—figure supplement 1 characterized and corrected? How were these artifacts detected, and were they constrained to specific spatial locations (such as the landing platform) or equally distributed throughout the flight paths? Could these artifacts obscure the detection of action potentials in certain parts of the environment more than in others? What did the authors do to address the potential influence of such artifacts on the neural signals, and especially on the LFP signal? For instance, motion artifacts can cause the distortion of electrical signals in low frequency bands. To what extent were gamma oscillations influenced by such interference? If any patterns of wing motion were associated with SSGs, then this could account for stronger fluctuations in the LFP, hence seemingly increasing the gamma power.

4) What part of the gamma frequency band was analyzed? The authors define a very wide band (40-140Hz), but there is a clear distinction between low and high gamma frequency bands that should also be analyzed. Furthermore, did the authors observe differences in the gamma-band effect as a function of the recorded layer in the superior colliculus? (see for example the paper by Ghose, Maier, Nidiffer and Wallace, Multisensory response modulation in the superficial layers of the superior colliculus, Journal of Neuroscience, 2014).

5) Did the selectivity of the neurons change as a function of anatomical location of the recorded neurons within the SC (such as more superficial vs. deeper layers of SC)?

6) Stability: In Figure 1—figure supplement 1 the authors show some action potentials from a single session, yet this does not address in a meaningful way the question of whether the neural features observed by the authors represent a stable phenomenon related to the bat's echolocation alone, or are actually modulated by different factors that change during the task. For such an analysis, the authors should compute the stability of the neural responses of all the analyzed neurons during equally spaced portions of each session and show quantitatively that the response profiles (such as tuning curves) remain stable throughout.

7) The definition of neural selectivity is unclear. What is the formal definition by which a neuron was determined to be selective along a particular dimension? Can the authors provide a formula and statistical description for how a neuron was selected for the analysis (the paper primarily relies on just 41 sensory-selective neurons)? How were sensory, sensorimotor and vocal-premotor neurons classified? How many were recorded from each bat? What statistical threshold was used to determine selectivity? Furthermore, was the selectivity of a neuron analyzed along any orthogonal set of dimensions different from the canonical x, y and z dimensions?

8) The authors argue that a neuron's best delay (target distance) is shifted to shorter delays (closer objects) when the bat is engaged in the production of SSGs, suggesting that distance tuning is dynamically remapped when the bat actively inspects objects in its environment. So, what actually drives the observed change in neural activity? Is it the result of higher echolocation rates? Is it that the bat is simply in a different behavioral mode? Is it the physical proximity of objects? A detailed analysis of the neural responses of single neurons during SSGs across different locations in the room, either near or further away from the target and at different echolocation rates, would be important for accurately assessing the observed correlated changes in neural activity.

Lastly, while the animal model and experimental preparation are certainly exciting, the paper in its current form does not provide revolutionary new insight nor a methodological innovation beyond what has already been reported in previous studies. Neural recordings have already been done in freely behaving and flying bats, and the dynamic encoding of spatial locations and attention by both single neurons and LFP (gamma oscillations in this case) has already been reported in a wide range of species. Yet, this paper does provide an important confirmation of previous hypotheses and data in the freely flying bat, and for that it does provide important insight.
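As background for the delay-tuning discussion above: in echolocation, target distance maps onto echo delay through the two-way travel time of sound. A minimal sketch of that conversion (the function names and the assumed sound speed of 343 m/s are illustrative, not taken from the paper):

```python
# Echo delay <-> target distance via two-way travel time of sound.
SPEED_OF_SOUND = 343.0  # m/s in air at ~20 C (assumed)

def distance_to_delay_ms(distance_m: float) -> float:
    """Echo delay (ms) for an object at distance_m metres."""
    return 2.0 * distance_m / SPEED_OF_SOUND * 1000.0

def delay_to_distance_m(delay_ms: float) -> float:
    """Target distance (m) implied by an echo delay in ms."""
    return delay_ms / 1000.0 * SPEED_OF_SOUND / 2.0
```

On this scale, a shift of a neuron's best delay by ~1 ms corresponds to a shift of the represented target distance of roughly 17 cm, which is why delay remapping is interpreted as distance remapping.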

In detail, the fact that the perceptual sensitivity of SC neurons changes in relation to spatial cues is not novel; it has already been shown in behaving animals (see, for example, Lovejoy and Krauzlis, 2017). Although in some of these non-human primate studies the head was restrained, the animals were performing active sensing using their eyes, which were freely moving. The authors further argue that most studies have examined only 2D spatial cues, but the three-dimensional tuning properties of neurons in SC have also been shown previously in the same bat species in the same lab (Valentine and Moss, 1997), albeit not in freely flying bats. A dynamic modulation of neural activity with respect to the bat's echolocation signals has also been previously demonstrated both during behavior in the same species (Ulanovsky and Moss, Hippocampus, 2011) as well as during flight in a different species (Geva et al., Nature Neuroscience, 2016). Yet these recordings were done in the hippocampus and not in the SC as in the present study. Hence, the importance of this work is that it brings many of these components together in the superior colliculus of the flying bat. This aspect is important as it allows addressing long-standing hypotheses about the tuning properties of such neurons in echolocating bats. It is important, however, that the authors properly and clearly delineate the statements of novelty in this study. As a side note: the authors further make the statement in the Introduction that "past studies of the SC have largely focused on sensorimotor representation in restrained animals, leaving gaps in our knowledge about the influence of action and attention on sensory responses in freely moving animals". Yet this statement is also not entirely accurate. Studies in rodents have also provided important contributions on the role of sensory input in modulating action and neural activity in the SC and should be acknowledged (see, for example, Felsen and Mainen, 2008).

[Editors' note: further revisions were requested prior to acceptance, as described below.]

Thank you for resubmitting your work entitled "Dynamic representation of 3D auditory space in the midbrain of the free-flying echolocating bat" for further consideration at eLife. Your revised article has been favorably evaluated by Andrew King (Senior Editor) and three reviewers, one of whom is a member of our Board of Reviewing Editors.

The manuscript has been improved but there are some remaining issues that need to be addressed before acceptance, as outlined below, and with more details provided in the individual reviews. All of these points relate to the analysis and presentation of the data, and the reviewers agreed that the novelty and impact of this study will be clearer once this is done.

1) Quantification of the behavior (represented either by correlation of flight trajectories, heatmaps or a different method).

2) Since there are data from only two bats, perhaps the authors could show the distribution of the main results across those two bats (as one would do for non-human primates which often have two subjects) to allow the readers to compare the responses.

3) Provide the standard measurement of unit-isolation quality by showing the distribution of Lratio and isolation distance for all of the analyzed neurons, and exclude neurons which are clearly multi-unit. The concern is that biases can emerge from noisy analysis relying on very low spike counts. The authors should provide a measure of what is a reasonable threshold on the number of spikes for a single neuron such that it can be safely included in the different analyses, or show that their results do not depend on spike counts.
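For reference, the two requested cluster-quality metrics are standard Mahalanobis-distance-based measures (commonly attributed to Schmitzer-Torbert et al., 2005, Neuroscience): isolation distance is the squared Mahalanobis distance of the n-th closest non-cluster spike (n = cluster size), and the L-ratio sums chi-squared tail probabilities over all non-cluster spikes, normalized by cluster size. A minimal sketch of how they could be computed (the function name and the choice of waveform features are illustrative assumptions):

```python
import numpy as np
from scipy.stats import chi2

def cluster_quality(features, labels, unit):
    """L-ratio and isolation distance for one putative single unit.

    features : (n_spikes, n_dims) spike waveform features (e.g. PCA scores)
    labels   : (n_spikes,) cluster label assigned to each spike
    unit     : label of the cluster to evaluate
    """
    in_c = labels == unit
    X_c, X_noise = features[in_c], features[~in_c]
    n_c, d = X_c.shape
    mu = X_c.mean(axis=0)
    cov_inv = np.linalg.inv(np.cov(X_c, rowvar=False))
    diff = X_noise - mu
    # Squared Mahalanobis distance of every non-cluster spike to the cluster.
    d2 = np.einsum('ij,jk,ik->i', diff, cov_inv, diff)
    # L-ratio: summed chi-squared tail probabilities, per cluster spike.
    l_ratio = np.sum(1.0 - chi2.cdf(d2, df=d)) / n_c
    # Isolation distance: D^2 of the n_c-th closest non-cluster spike.
    iso_dist = np.sort(d2)[n_c - 1] if len(d2) >= n_c else np.nan
    return l_ratio, iso_dist
```

Well-isolated units have L-ratio near zero and large isolation distance; multi-unit clusters have the opposite profile, which is what makes these distributions useful for the exclusion requested above.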

Reviewer #1:

The authors have been very responsive to my previous review. I accept their reasons for relying on their model for the timing of echo-evoked spikes.

With respect to neurons recorded, although data are still from only 2 bats, they have increased the number of neurons analyzed. The authors now report 182 single neurons recorded in the SC, with 67 being selective to pulse-echo pairs. The authors have answered questions about unit isolation with a citation to the Quiroga et al. 2004 paper, and also point out that data from both bats show similar results. More information should be provided about these units. The authors could provide data on their units to support divisions into sensory, motor, etc., including latency and rate, as well as spike sorting criteria.

In the revision, the authors provide more data on their SSG responses. It would be helpful to know how the 26 units shown for the SSGs were classified. The only group of neurons that I can find with n=26 is the vocal premotor group. Are these the same units?

The other revisions greatly add to the paper, providing data on bat flight paths, and number of sessions in which responses were recorded.

Reviewer #2:

The manuscript has improved, the authors took into account all my questions/comments in their new manuscript. The manuscript could be published as it is.

Reviewer #3:

While I appreciate the response from the authors I still find the information provided lacking on the three main domains I described in my original review:

1) Behavioral data analysis and presentation: There is still no detailed description and, importantly, no quantification of the behavior. The authors say that bats were released from different locations and did not exhibit stereotyped behaviors but, as requested, they should show this. As requested previously, can the authors provide a quantitative assessment of the reproducibility (or lack thereof) of both the starting positions and flight trajectories? This can, for example, be done in the form of a heat-map illustrating the spatial distribution (in 3D space) of the flight trajectories. But I encourage the authors to provide a different assessment of this important point as well, such that their argument for more variable flight trajectories and starting positions is better supported.

Also, the information on reward contingency is lacking. How many trials in a session resulted in landing on a platform and how many did not? When the bats did not land on the platform, were they also rewarded? Many more details of the behavior are required and are at the moment not provided by the authors. Again, this information is important for addressing the nature of the neural responses as postulated by the authors and for aligning their results with findings from other species (such as primates, rodents, etc.). This will allow the authors to engage a broader audience beyond the single species of bats.

2) Analysis of neural data: Despite the request in the previous round of reviews, there is no quantification of the quality of the neural signal. As requested, the authors should provide some assessment of the quality of their neural data in the form of isolation indexes and Lratio. Again, such measures are fairly standard in neural analysis and would allow the readers to assess the data more properly. Also, the authors have now removed the threshold on the minimal number of action potentials for a neuron to be included in their analysis. This allowed them to increase the N of "valid neurons" without a quantification of how the sensitivity of the observed tuning curves depends on the number of action potentials included in the analysis. Furthermore, this was not requested by the reviewers and brings about concerns regarding conclusions being made based on a very low number of spikes, without support that this cannot bias the result. I encourage the authors to reinstate the threshold and, importantly, provide a quantitative threshold that would ensure the results are not biased by low numbers of action potentials included in the analysis.

3) Novelty: The points raised by the authors are still in agreement with the fact that this paper very elegantly puts together pieces of data that have been previously reported elsewhere, either in bats or in other species, and mostly serves as a verification of previous findings. Furthermore, some of the statements made by the authors are unclear. For example, when comparing to the work of Geva et al., the authors claim that the Egyptian fruit bat does not dynamically modulate its sonar signal in response to its surroundings, but the senior author of the paper is an author on a paper stating that it does (S. Danilovich, A. Krishnan, W. J. Lee, I. Borrisov, O. Eitan, G. Kosa, C. F. Moss, Y. Yovel (2015) Bats regulate biosonar based on the availability of visual information, Current Biology 25, 1107-1125), which is puzzling. Furthermore, other studies from Yosef Yovel and Nachum Ulanovsky have demonstrated that this species does in fact dynamically change the directionality of its sonar beam in response to the acoustic features of its environment, and the senior author of the current paper is also an author on that manuscript: Y. Yovel, B. Falk, C. F. Moss, N. Ulanovsky (2011) Active control of acoustic field-of-view in a biosonar system, PLoS Biology 9: e1001147 (open article).

In summary, while I am generally supportive of this important work, I still feel that the authors should provide more detailed responses to the requests of the referees and frame their work better with respect to the vast knowledge of the neurophysiological properties of SC neurons across species. The latter would only benefit the authors, as it will allow them to extend the impact of and interest in their work to a broader audience, such as the readers of eLife.

https://doi.org/10.7554/eLife.29053.027

Author response

Essential changes:

Despite these achievements, there are some major concerns. Data are reported from only 2 bats, and only 41 single units are selective to pulse-echo pairs. These are not described in sufficient detail to evaluate the results. Detailed questions are provided with the individual reviews, which are provided below.

In our revised manuscript, we provide more details on the numbers of units, and the proportion of the total neurons that were sensory, sensorimotor, and motor. Since our original submission, we have substantially increased the neuron count. We now have data on a total of 182 neurons. Further, after classification (see Materials and methods subsection “Classification of neurons into sensory, sensorimotor and vocal-premotor cells” for details), we have now analyzed data from 67 sensory (auditory) neurons, 45 sensorimotor neurons, and 26 vocal premotor neurons. Out of the 67 sensory neurons, 46 showed 3D spatial receptive fields; the remaining 17 neurons were a mix of 2D and 1D spatially tuned neurons (details are included in the subsection “Construction of 3D spatial response profiles”; also, see Figure 4—figure supplement 2). Further, we would like to emphasize that data from both bats in the study show similar results.

How can the authors show that the bats were actually using their sonar system for performing the task at hand and hence attending and actively interpreting those signals?

In our experiments, the bats’ behavior was modulated by echo returns, and this is indicated by the adaptive changes in sonar call parameters with distance to objects (see Figure 1E), in line with findings reported in previous studies (Griffin, 1958; Simmons et al., 1979). The bats also produced sonar sound groups, clusters of echolocation calls, which are observed when bats inspect objects in space (Falk et al., 2014; Moss et al., 2006; Petrites et al., 2009; Sändig et al., 2014; Wheeler et al., 2016).

Additionally, the room lighting was outside the bat’s visible range: ERG data show that Eptesicus does not see wavelengths longer than 600 nanometers (Hope and Bhatnagar, 1979). Moreover, the bats were released at different locations in the room for each trial, which required them to use sonar echoes to steer around obstacles rather than a consistent or memorized flight path around objects in the room (also, see (Figure 1A) for more details). We have included these details in the revised manuscript (subsection “Experimental design”).

It is not clear whether the responses were recorded from bats prior to training to fly around the room, or during, or after training.

Recordings were taken after the bats had experience in the experimental test room. Details have now been added to the text in the Materials and methods section (subsection “Experimental design”).

How many days did each bat wear the telemetry device?

After the chronic implant surgery each bat was given several days to rest and acclimate to the implanted device, after which they were able to fly and navigate around objects in the flight room. Data collection began after the animal was able to perform ~30 flight trials per session, which took place twice a day (morning and afternoon) in the experimental test room. Bat A flew for 12 sessions, and Bat B flew for 15 sessions. These details are now in the text (subsection “Experimental design”, last paragraph).

Were neurons recorded when the bat was passively listening to sounds?

The SC recordings were collected when the bats were navigating, and 3D spatial tuning of SC neurons was constructed off-line with the use of the echo model (see responses to reviewers). Since spatial tuning of neurons was computed off-line, it was not feasible within a 30-45 minute experimental session to compare echo-evoked activity in free-flying bats with passive responses to computer generated pulse-echo pairs constructed to mimic stimulus azimuth, elevation and delay parameters that elicited responses from neurons in flight. We included these details of the methods in the revised manuscript (subsection “Echo model - Reconstructing the instantaneous acoustic stimulus space at the ears of the bat”, second paragraph and subsection “Experimental design”).

Could echoes ever be recorded in order to precisely determine echo-response latencies?

We did not record echoes with a microphone on the bat’s head during the course of the experiment. Below we list reasons why it was not feasible to record echoes directly received at the ears of the free-flying bat to characterize 3D spatial receptive fields.

- Noise floor: The noise floor of microphones is inadequate to record all of the echoes that return to the bat as it vocalizes. This would restrict the construction of spatial receptive fields, as there would be ‘missing echoes’. The echo model overcomes this limitation.

- Single/multiple microphones: A single microphone can only give information about the time of arrival of strong echoes. To obtain echo direction information an array of microphones (at least 4) would be required to use time difference of arrival algorithms for localizing echo sources.

- Size and weight considerations: For bats weighing approximately 18 grams, any increase in the weight of the head mounted devices would seriously limit their flight behavior.
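The time-difference-of-arrival approach mentioned above can be sketched as a small least-squares problem: given the positions of several microphones and the relative arrival times of a sound at each, the source is the point whose predicted range differences best match the measured delays. A minimal illustration (the function name, microphone layout and assumed sound speed are hypothetical, not from the study):

```python
import numpy as np
from scipy.optimize import least_squares

C = 343.0  # speed of sound in air, m/s (assumed)

def locate_source(mics, tdoas, x0=None):
    """Estimate a 3D source position from time differences of arrival.

    mics  : (n, 3) microphone positions; microphone 0 is the reference
    tdoas : (n-1,) measured arrival-time differences t_i - t_0, in seconds
    """
    mics = np.asarray(mics, dtype=float)

    def residuals(p):
        # Predicted range differences vs. measured delays.
        d = np.linalg.norm(mics - p, axis=1)
        return (d[1:] - d[0]) / C - tdoas

    if x0 is None:
        x0 = mics.mean(axis=0)  # start from the array centroid
    return least_squares(residuals, x0).x
```

With three unknown coordinates, at least four microphones (three independent TDOAs) are needed, which matches the "at least 4" figure cited above; extra microphones over-determine the system and improve robustness to timing noise.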

We report in our revision that we used a microphone array to record echoes to validate the echo model’s computation of the arrival time and direction of echoes from objects in the room (see below). We could therefore analyze neural response latencies to the computed arrival times of echoes. The mean response latency of the single neurons we recorded was 5.9 ± 3.4 ms. In more detail, the minimum spike latency was 3 ms and the minimum s.d. of latency was 1 ms. The median s.d. of the response latencies for the 46 sensory neurons was 3.8 ms. Previous publications have reported a wide range of response latencies in the SC of the passively listening bat, as long as 40 ms, but also as short as 4 ms (Valentine and Moss, 1997), 3.6 ms (Jen et al., 1984) and 4 ms (Wong, 1984), and short-latency responses are likely mediated through a direct projection from the nucleus of the central acoustic tract to the SC (Casseday et al., 1989).

The second major issue concerns the model, which has been used to replace recording of echoes. This is described as technical advance, and appears to be fundamental to the analysis of neural responses. Nevertheless, insufficient details are provided to validate the model.

We have now included details of the echo model, which were previously in the supplementary material, in the main text. Further, we have elaborated on details of the echo model, as pointed out by the reviewers.

Your flight room was lined with acoustic foam to reduce multiple echoes from the surroundings; however, surely the bat must have known the position of the walls etc. from its own echolocation? Were there (maybe faint) echoes from the floor or the camera/microphones?

The reviewer is correct that the bats received echoes from the walls, especially at closer distances. In the revised manuscript, we have added further details about wall/camera echoes (subsection “Classification of neurons into sensory, sensorimotor and vocal-premotor cells”). Briefly, every call that the bat produced resulted in a series of echoes, which included echoes from ensonified objects, walls, cameras etc. However, due to the close proximity of the bat with respect to the objects, the object echoes were always the first to arrive at the bat’s ears. In our sensory cell classification, we only considered neurons that responded to the first echo from each vocalization, which arrived before wall/camera echoes. These details have been added to the manuscript.

When referring to the sharpening of the neuronal response and the shift of the response towards closer targets during grouped pulses, you emphasize the potential role of attention, and attention-guided sonar pulse emission. You also argue that a neuron's best delay (target distance) is shifted to shorter delays when the bat is engaged in the production of SSGs. A detailed analysis of the neural responses of single neurons during SSGs across different locations in the room would be important for accurately assessing the observed correlated changes in neural activity.

We have analyzed the bat’s production locations of SSGs and non-SSGs across the experimental flight room for individual sessions and the distributions for both are overlapping and distributed across the room. We would be happy to provide more details of this analysis if required by the reviewers.

We also show in our revised manuscript that the spatial tuning of cells is stable across each recording session. In this context, we would like to mention that the sparseness of neural activity in the auditory system of big brown bats (Dear et al., 1993; Valentine and Moss, 1997) limits the spiking data we can use for this assessment on a per-trial basis. We compared response profiles of neurons in the first and second half of recording sessions, and found that the vast majority of neurons (37 out of 46 neurons) had stable spatial receptive fields between the two halves of each recording session. These details are included in the text in the fourth paragraph of the subsection “Construction of 3D spatial response profiles” (also, Figure 4—figure supplement 1).
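The split-half stability check described above can be illustrated with a simple correlation of tuning curves built from the two halves of a session. This is only a sketch of the general approach (the function name is hypothetical, and a full analysis would normalize spike counts by stimulus occupancy):

```python
import numpy as np

def split_half_stability(spike_times, stim_values, bins, t_split):
    """Pearson correlation between tuning curves from the first and
    second half of a session.

    spike_times : (n_spikes,) spike timestamps within the session
    stim_values : (n_spikes,) stimulus value at each spike (e.g. echo delay)
    bins        : bin edges along the tuning dimension
    t_split     : time dividing the session into two halves
    """
    first = stim_values[spike_times < t_split]
    second = stim_values[spike_times >= t_split]
    # Spike-count tuning curves per half (occupancy normalization omitted).
    h1, _ = np.histogram(first, bins=bins)
    h2, _ = np.histogram(second, bins=bins)
    return np.corrcoef(h1, h2)[0, 1]
```

A high correlation between halves indicates a stable receptive field; a criterion on this value is one way to classify neurons as stable, as the 37/46 figure above implies.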

The last major concern is that the authors clearly delineate the novelty of this study, since this was questioned by the reviewers.

We have responded, in detail, to this concern by the reviewers (see below) and also list our argument here.

1) 3D sensory responses in a freely behaving bat to self-generated echoes from physical objects have never been previously demonstrated.

2) Our results are the first to show that 3D sensory space is represented in the brain of an animal interacting with objects in its physical environment, and importantly, that sensory representations are modified by adaptive vocal-motor behaviors.

3) Our study is significantly different from Geva et al.’s 2016 report of hippocampal place field remapping in Egyptian fruit bats exposed to different sensory environments (vision vs. echolocation). Not only do the data from the two studies come from recordings in different species and different brain structures, but they also come from experiments with different dependent and independent variables.

4) While past studies have implicated the SC in spatial attention and perception through local inactivation (Lovejoy and Krauzlis, 2017; McPeek and Keller, 2004), our work differs in that we recorded activity in single SC neurons in freely behaving animals and demonstrate remapping and sharpening of 3D neurons with changes in the animal’s echolocation behavior.

5) The characterization of 3D response profiles in freely echolocating bats is an important breakthrough. For many decades, scientists have mimicked natural echolocation in restrained (often anesthetized) passively listening bats, without testing the validity of this approach.

6) Importantly, we believe that our research can inspire colleagues to conduct related studies in other species, which would advance a more complete understanding of nervous system function in the context of real-world, natural behaviors.

Please also respond to the remaining reviewer concerns below.

Reviewer #1:

It is not clear whether the authors ever record the echoes that lead to their echo-evoked spikes, or just modelled them. The neural responses are described as responses to sensory locations in 3D egocentric space, but I cannot see the echoes on the figures.

As stated above, we did not record echoes with a microphone on the bat’s head during the course of the experiment and we explained why it was not possible to record echoes directly received at the ears of the free-flying bat to characterize 3D spatial receptive fields.

- Noise floor: The noise floor of microphones is inadequate to record all of the echoes that return to the bat as it vocalizes. This would restrict the construction of spatial receptive fields, as there would be ‘missing echoes’. The echo model overcomes this limitation.

- Single/multiple microphones: A single microphone can only give information about the time of arrival of strong echoes. To obtain echo direction information an array of microphones (at least 4) would be required to use time difference of arrival algorithms for localizing echo sources.

- Size and weight considerations: For bats weighing approximately 18 grams, any increase in the weight of the head mounted devices would seriously limit the flight behavior.

This information is included in our revised manuscript. We also elaborate in our revision on echo measurements with a microphone array to validate the echo model’s computation of the arrival time and direction of echoes from objects in the room, which allowed us to analyze neural response latencies to the computed arrival times of echoes.

Some of the claims of novelty seem overblown. The transformation from 2D to 3D spatial receptive fields is inherent in bat echolocation, and neurons tuned to different spatial locations and ranges have been described multiple times.

This has been addressed briefly in our response to the Editor’s general comments and in more detail below in response to the comments of reviewer 3.

Reviewer #2:

In this study neuronal activity in the superior colliculus was recorded in free flying bats that have a wireless-multielectrode system implanted. Methodologically this study provides a breakthrough: For the first time it was possible to study spatial neuronal coding in an echolocating bat while the animal was freely moving and was confronted with several objects in its flight room. To be able to associate neuronal space coding with instantaneous occurrence of sonar echoes at the bat's ear, the authors employ a sophisticated model for predicting echo occurrence at the bat's ears that includes bat flight trajectory and head position filmed by several cameras, bat echolocation calls measured with a microphone array, and estimates of sonar beam width. This elegant technique is impressive and avoids having to put microphones on the bats' heads and it produces novel results: The authors convincingly show that the spatial focus of SC neurons sharpens and shifts to closer objects when the bat employs sonar sound groups, e.g. call doublets, to inspect its environment. The latter are indicative of increased attention. The Discussion could benefit from coverage of literature on sharpening and shift of spatial neuronal tuning in other auditory areas like the auditory cortex.

The study is very well done, it employs sophisticated new methods and its novel results clearly merit publication in eLife.

We thank the reviewer for these comments and have expanded the literature review in the Discussion of our revised manuscript.

Reviewer #3:

The proposed manuscript by Kothari et al. describes their investigation of the encoding properties of midbrain neurons for egocentric space in a freely moving animal. By recording from free-flying bats navigating via their echolocation system, they could correlate the neural activity with the bats' auditory responses to ongoing echo-sampling of the environment. Through the aid of a physics-based echo model, the authors revealed that sensory neurons in the superior colliculus not only encode 3D egocentric space but also appear to sharpen their tuning curves with respect to the bat's ongoing behavior. The authors' findings support the hypothesis that a bat's echolocation system could enhance auditory spatial resolution, comparable to how eye saccades and foveal fixation enhance it for the visual system. Their results show for the first time how sensory neurons sharpen their 3D spatial resolution during a natural spatial navigation task. While the results are exciting from that perspective, there are many details that need to be addressed, both at the level of the behavior and at the level of the neural data, in order to determine whether the presented data support the authors' conclusions. Furthermore, a more refined description of the novelty of this study would be helpful.

We thank the reviewer for pointing out the shortcomings in our presentation of data collection and analysis, and we have made every effort to correct this by adding extensive detail to our Materials and methods and Results sections. We specify our changes below.

The description of the behavior, task and performance is severely lacking, which hinders accurate assessment of the authors' interpretation of the neural data. Considering the well-established modulation of neural activity in the superior colliculus (SC) by fine behavioral parameters, training procedures and sensory input, these details carry a lot of weight in validating the conclusions proposed by the authors. A few notable examples are the following:

1) How can the authors show that the bats were actually using their sonar system to perform the task at hand, and hence attending to and actively interpreting those signals?

We thank the reviewer for pointing out this source of confusion in our manuscript. In our experiments, the bats' behavior was modulated by echo returns, as indicated by adaptive changes in their vocal behavior. Additionally, the infrared room lighting was outside the bat's visible range: ERG data show that Eptesicus does not see wavelengths longer than 600 nanometers (Hope and Bhatnagar, 1979). Moreover, the bats were released at different locations in the room for each trial, which required them to use sonar echoes, rather than a consistent or memorized flight path, to steer around obstacles in the room (see Figure 1A for more details). We have included these details in the revised manuscript (subsection “Experimental design”).

2) How reproducible was the flight behavior of the bats? If it was highly stereotyped, could it be that the bat was relying more on spatial memory and hence only attending partially to a small fraction of the echo-pulses as "verification" sampling? If this is indeed the case, then could it be that the differences observed by the authors are not a dynamic modulation of the spatial signal but rather a transition between different behavioral states? It would also be informative if the authors reported the number of trials that go into each analysis and the spatial density of the flight behavior, and showed all the three-dimensional flight paths from representative sessions, so that a better assessment of the animals' spatial behavior can be obtained.

We varied the locations of the four flight obstacles across recording sessions so that the bat could not use spatial memory for navigation, and we also released the bats from different locations to avoid the development of stereotyped flight patterns. These details have been added to the revised manuscript (subsection “Experimental design”). Bat A flew for 12 sessions, and Bat B flew for 15 sessions. Figure 3—figure supplement 1 shows one bat's flight trajectories in a single session and illustrates room coverage. Coverage was restricted in elevation because of the height of the flight room, which has a floor-to-ceiling dimension of approximately 250 cm. In the revision, we have elaborated on these methods to demonstrate the bats' flight paths and coverage of the room.

3) How were the animals trained on this task, and how well trained were the bats? Is it possible that the sharpening of the neural spatial resolution depends on how well a task is known, meaning a high expectancy in the animal? Did the authors observe any changes in the sharpening or dynamic tuning of the neural activity during days of poorer vs. better performance?

We have expanded the Materials and methods section of our manuscript to give a more detailed description of the behavioral task performed by the animals in our study. Briefly, two bats flew freely in a large experimental test room (6 x 6 x 2.5 m) and were fed mealworms throughout recording sessions to keep them active and motivated to fly, but the animals were not specifically rewarded for the task. The room was illuminated with long-wavelength lighting to preclude the bats' use of vision, and the animals were released from different locations at the start of each trial to avoid the use of spatial memory for navigation. The obstacles the bats encountered in flight were four plastic cylinders (hard plastic, so as to be acoustically reflective), approximately 13 cm in diameter and 30 cm in length, and their positions were changed after each recording session. These measures were taken to emphasize the bats' use of echolocation rather than spatial memory. We therefore cannot examine our data from the perspective of ‘expectancy’ or ‘performance.’

4) What was the reward contingency? Neural activity in the SC is believed to be modulated by upcoming reward (see, for example, Ikeda and Hikosaka, Neuron, 2003). Did the authors observe different patterns of neural activity near rewarded (for example, the landing platform) vs. non-rewarded (for instance, a hanging obstacle) locations? Was the location of the obstacles changed between recording sessions, and did this manipulation have an effect on the animals' behavior and neural activity?

The bats were not explicitly rewarded in our task; they were simply fed after every trial, regardless of the animal's behavior, to keep them active and motivated to fly. We therefore cannot separate the data into rewarded vs. non-rewarded sections of the room, or analyze it based upon reward contingency. We have included these details in the revised manuscript (subsection “Experimental design”, first paragraph). As mentioned above, the location of the objects was changed for each recording session.

5) Can the authors show that the bats used echolocation rather than vision to perform the task? I assume the room was dark during the experiment, but how dark was it (lux levels)? Can the authors show that the bats' behavior was modulated by the echo-pulses, or alternatively, that they did not use vision for attending to distal cues and echolocation for attending to more proximal cues (which might provide an alternative explanation for the neural responses)? This information is important in order to assess the extent to which these bats used different sensory modalities (audition vs. vision) in performing different aspects of this task. As a side note, high-speed video tracking systems most often employ infrared lighting. Is there data on Eptesicus fuscus's wavelength sensitivity?

We thank the reviewer for the comment and agree that details of the room lighting are important to mention. These have now been added to the manuscript. Briefly, the room lighting was outside the bat's visible range: ERG data show that Eptesicus does not see wavelengths longer than 600 nanometers (Hope and Bhatnagar, 1979), and our infrared cameras operate at a wavelength of ~850 nanometers. These details are now included in the manuscript (subsection “Experimental design”).

6) Can the authors provide a more detailed description of the type/size/shape of the objects used in their task? The results seem to show that the bats needed only a glimpse of an object, using it mainly to detect the object's spatial position rather than needing more information about its spatial extent. Was this indeed the case for most objects presented, or were some inspected for longer durations of time? If so, did this influence the echolocation signals and corresponding neural activity?

As noted above, the obstacles were hard plastic cylinders that returned strong echoes to the free-flying bat; they were 13 cm in diameter and 30 cm in length. Detailed descriptions of the obstacles are included in the revised manuscript. The animals did ‘attend’ to objects at different levels, as indicated by the production of SSGs (sonar sound groups), and as our manuscript details, this increased spatial attention altered the tuning profiles of the neurons in our study.

7) What echolocation signals are actually being analyzed here? Are only returning echoes analyzed, or also "missed" echoes? That is, did the authors look at all neural responses after an echolocation call was made, independent of whether the call resulted in a reflection?

In our experimental setup, all sonar calls resulted in echo returns, so there were no ‘missed echoes’: for every call a bat produced, echo timings and directions were computed using the echo model. The bat always received echoes each time it vocalized, whether from a flight obstacle, the platform, the floor, the ceiling, or a wall, and these were used to construct spatial tuning profiles of sensory neurons. Spatial tuning profiles were computed for all neural responses within a trial: if a neuron responded, this entered into the calculation of spike probability for that location in space, whether or not an echo was recorded at the microphones. We have added this information to the Materials and methods subsection “Classification of neurons into sensory, sensorimotor and vocal-premotor cells”.

In addition to the analysis of neural data with respect to behavior noted above, further basic information should be analyzed and provided by the authors to better assess the quality of the presented data and its validity for the authors' interpretation. For instance:

1) What was the quality of the sorting? How well were the clusters separated into single units? The authors should provide quantitative measures for the quality of their sorting (such as isolation index and Lratio).

We used the wavelet-based clustering method developed by Quiroga et al., 2004 to perform the sorting. This program uses a Monte Carlo simulation to establish the significance of cluster separation and does not report cluster-separation values (such as Lratio or isolation index). Should the reviewers request cluster isolation values, we would be happy to compute and provide them separately.

2) Furthermore, the authors describe a threshold of 200 spikes for a cell to be included in the analysis. How was this threshold chosen, and does this number include only spikes recorded during flight? The authors further set a seemingly arbitrary threshold of at least 20 spikes during SSG and non-SSG epochs, which results in a total sample of 20 neurons for this entire analysis. If this is indeed the case, it seems concerning to base conclusions on 20 action potentials and such a low number of neurons. The authors should either increase their neuron count or show statistically that their threshold is valid for avoiding low-sample biases in this analysis.

In our revision, we have added neurons to our data set (as explained above), provided details on how the thresholds were chosen, and added statistics to show that our results are not the product of a low-sample bias. Briefly, the wavelet clustering requires a minimum cluster size threshold for isolating and separating clusters. Due to the sparse nature of firing in the bat auditory system, we used a threshold of a minimum of 50 spikes per cluster.

Further, we have removed the threshold of 20 spikes used for the SSG and non-SSG analysis; that analysis now includes all 56 distance-tuned neurons. These details are provided in the revised manuscript (subsection “Adaptive sonar behavior modulates 3D spatial receptive fields”).

3) How were the motion artifacts shown by the authors in Figure 1—figure supplement 1 characterized and corrected? How were these artifacts detected, and were they constrained to specific spatial locations (such as the landing platform) or distributed equally throughout the flight paths? Could these artifacts obscure the detection of action potentials in certain parts of the environment more than in others? What did the authors do to address the potential influence of such artifacts on the neural signals, and especially on the LFP signal? For instance, motion artifacts can distort electrical signals in low frequency bands. To what extent were gamma oscillations influenced by such interference? If any patterns of wing motion were associated with SSGs, this could account for stronger fluctuations in the LFP and hence a seeming increase in gamma power.

The wavelet-based sorting method developed by Quiroga et al., 2004 can also be used to exclude motion artifacts from the neural recordings. In our experiments, the neural recordings from free-flying bats rarely showed motion artifacts. As the reviewer speculates, there are some motion artifacts during landing, but we have excluded these time points from the analysis. Please see the text (subsection “Surgical Procedure, neural recordings and spike sorting”, last paragraph) for further details.

The reviewer is correct in pointing out that wing beat artifacts could indeed influence the lower-frequency oscillations included in the LFP, and we examined whether movement artifacts from the bats' wing beats could have corrupted the LFP analysis. The bat's wingbeat rate is approximately 12 Hz, whereas the gamma band we analyzed spans 40-140 Hz; the third harmonic of the wingbeat, which approaches the frequency range of the gamma band, was significantly attenuated. To further ensure that movement artifacts did not corrupt the analysis of the LFP, we chose channels where the power ratio between the low-frequency band (10-20 Hz) and the gamma band was less than 6 dB. We identified 21 such low-noise channels (see Figure 6—figure supplement 2), which were then used for further analysis. We have added this information about the possible influence of wing beat artifacts on the LFP analysis to the revised manuscript (subsection “Local field potential”).
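For concreteness, the channel-screening criterion described above (a low-band to gamma-band power ratio below 6 dB) can be sketched as follows. This is our own minimal illustration in Python; the function name, Welch parameters and window length are assumptions, not the analysis code used in the study.

```python
import numpy as np
from scipy.signal import welch

def gamma_channel_is_clean(lfp, fs, low_band=(10.0, 20.0),
                           gamma_band=(40.0, 140.0), max_ratio_db=6.0):
    """Return True if the low-band/gamma-band power ratio is below the
    dB threshold (i.e., the channel is not dominated by movement noise).

    lfp : 1-D array of one channel's LFP samples
    fs  : sampling rate in Hz
    """
    # Welch PSD with ~1-s windows (an assumed, not reported, choice)
    freqs, psd = welch(lfp, fs=fs, nperseg=int(fs))

    def band_power(band):
        mask = (freqs >= band[0]) & (freqs <= band[1])
        return np.trapz(psd[mask], freqs[mask])  # integrate PSD over band

    ratio_db = 10.0 * np.log10(band_power(low_band) / band_power(gamma_band))
    return ratio_db < max_ratio_db
```

A channel dominated by wingbeat-frequency noise (around 12 Hz) would yield a large positive ratio and be rejected, while a channel whose power sits mostly in the gamma range passes.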

4) What part of the gamma frequency band was analyzed? The authors define a very wide band (40-140 Hz), but there is a clear distinction between low and high gamma frequency bands that should also be analyzed. Furthermore, did the authors observe differences in the gamma band effect as a function of the recorded layer in the superior colliculus? (See, for example, the paper by Ghose, Maier, Nidiffer and Wallace, Multisensory response modulation in the superficial layers of the superior colliculus, Journal of Neuroscience, 2014.)

We analyzed the gamma band between 40 and 140 Hz, as explained in the Materials and methods. The reviewer's comment prompted us to examine the gamma band effect as a function of the dorsal-ventral axis of the SC, and we do not see any change as a function of recording depth. These details are included in our revision (subsection “Gamma power increases during epochs of sonar sound group production”, first paragraph; also Figure 6—figure supplement 1).

5) Did the selectivity of the neurons change as a function of anatomical location of the recorded neurons within the SC (such as more superficial vs. deeper layers of SC)?

Our analysis does not show any change in neural selectivity as a function of anatomical location (recording depth). This has been included in the revision (subsection “3D spatial tuning of single neurons in the SC of free flying bats”, last paragraph, and Figure 4—figure supplement 2).

6) Stability: In Figure 1—figure supplement 1, the authors show some action potentials from a single session, yet this does not address in a meaningful way whether the neural features observed represent a stable phenomenon related to the bat's echolocation alone, or are actually modulated by different factors that change during the task. For such an analysis, the authors should compute the stability of the neural responses of all analyzed neurons during equally spaced portions of each session and show quantitatively that the response profiles (such as tuning curves) remain stable throughout.

We agree with the reviewer’s comments, and in our revised manuscript we show that the spatial tuning of cells is stable across each recording session. In this context, we would like to mention that the sparseness of neural activity in the auditory system of big brown bats (Dear et al., 1993; Valentine and Moss, 1997) limits the spiking data we can use for this assessment on a per-trial basis. We compared response profiles of neurons in the first and second half of recording sessions, and found that the vast majority of neurons (37 out of 46) had stable spatial receptive fields between the two halves of each recording session. These details are included in the fourth paragraph of the subsection “Construction of 3D spatial response profiles” (also Figure 4—figure supplement 1).

7) The definition of neural selectivity is unclear. What is the formal definition by which a neuron was determined to be selective along a particular dimension? Can the authors provide a formula and statistical description of how a neuron was selected for the analysis (the paper relies primarily on just 41 sensory-selective neurons)? How were sensory, sensorimotor and vocal-premotor neurons classified? How many were recorded from each bat? What statistical threshold was used to determine selectivity? Furthermore, was the selectivity of a neuron analyzed along any orthogonal set of dimensions different from the canonical x, y and z dimensions?

The reviewers have made the important point that the description of how sensory, sensorimotor and vocal-premotor neurons were classified was lacking in the original manuscript. We did not present information on sensorimotor and vocal-premotor neurons in the original version because we are preparing a separate paper that focuses on these classes of neurons in the bat SC. We provide a brief explanation below, and we have added a section to the manuscript detailing the classification methodology (subsection “Classification of neurons into sensory, sensorimotor and vocal-premotor cells”).

In the case of a passively listening animal, it is straightforward to identify and characterize sensory activity. In head restrained animals, traditionally, sensory, sensorimotor and premotor cells are identified by separating the sensory and motor behaviors in time. This allows the experimenter to solve the problem of assigning neural activity to independent behavioral/sensory events. In a freely moving animal that is interacting with physical objects in its environment, there are challenges to analyzing neural activity in this way. To address this challenge, we developed an algorithm based on the firing latency distributions of spike times with respect to echo arrival time, previous call production time, and next call production time. This algorithm classifies neurons as sensory, sensorimotor and vocal premotor, based upon the temporal relationship between echo time and spike latency.
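As a purely illustrative sketch of this kind of latency-based classification: a unit can be treated as "locked" to an event stream if the spread of its latencies to that stream is small. The locking test, the threshold, and all names below are our invention for illustration, not the authors' actual algorithm.

```python
import numpy as np

def classify_neuron(spike_times, echo_times, call_times, lock_sd_ms=5.0):
    """Toy latency-distribution classifier (all times in ms, sorted).
    Sensory units lock to the preceding echo; vocal-premotor units lock
    to the upcoming call; units locked to both are sensorimotor."""
    spike_times = np.asarray(spike_times, float)

    def sd_after(events):   # latency from the most recent event to each spike
        idx = np.searchsorted(events, spike_times, side='right') - 1
        ok = idx >= 0
        return np.std(spike_times[ok] - events[idx[ok]]) if ok.any() else np.inf

    def sd_before(events):  # lead time from each spike to the next event
        idx = np.searchsorted(events, spike_times, side='left')
        ok = idx < len(events)
        return np.std(events[idx[ok]] - spike_times[ok]) if ok.any() else np.inf

    sensory = sd_after(np.asarray(echo_times, float)) < lock_sd_ms
    premotor = sd_before(np.asarray(call_times, float)) < lock_sd_ms
    if sensory and premotor:
        return "sensorimotor"
    if sensory:
        return "sensory"
    return "vocal-premotor" if premotor else "unclassified"
```

The point of the sketch is only that tight latency distributions relative to echo arrival vs. upcoming call production can dissociate sensory from premotor activity even when sensory and motor events are not separated in time by the experimenter.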

Once a neuron was identified as sensory (see above explanation), direction information from the echo model was converted into egocentric coordinates of the bat’s instantaneous position and the X, Y and Z information was converted into azimuth, elevation and range coordinates. After normalizing the neural responses based on the amount of coverage in each of these dimensions, we fit normal curves to these responses. We also performed an ANOVA to determine the significance of spatial tuning along each dimension. These details have been added to the manuscript (subsection “Construction of 3D spatial response profiles”).
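The conversion from Cartesian coordinates to azimuth, elevation and range mentioned above is a standard spherical transform. A sketch follows; the axis conventions (x rightward, y forward along the head axis, z upward) and names are our assumptions, as the manuscript defines its own head-centered frame.

```python
import numpy as np

def cartesian_to_egocentric(x, y, z):
    """Convert egocentric Cartesian offsets of an echo source (meters)
    to azimuth (deg), elevation (deg) and range (m).
    Assumes x = rightward, y = forward, z = upward relative to the head."""
    rng = np.sqrt(x**2 + y**2 + z**2)
    azimuth = np.degrees(np.arctan2(x, y))      # 0 deg = straight ahead
    elevation = np.degrees(np.arcsin(z / rng))  # 0 deg = horizontal plane
    return azimuth, elevation, rng
```

Applying this per echo event yields the (azimuth, elevation, range) samples over which normalized response curves and the ANOVA for tuning significance would then be computed.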

Neural selectivity was analyzed only with respect to the X, Y, and Z dimensions. Since the bat’s own calls evoked echoes that stimulated SC neurons, we could not systematically analyze responses to other stimulus dimensions, such as sound frequency or intensity.

8) The authors argue that a neuron's best delay (target distance) is shifted to shorter delays (closer objects) when the bat is engaged in the production of SSGs, suggesting that distance tuning is dynamically remapped when the bat actively inspects objects in its environment. So, what actually drives the observed change in neural activity? Is it the result of higher echolocation rates? Is the bat simply in a different behavioral mode? Is it the physical proximity of objects? A detailed analysis of single-neuron responses during SSGs across different locations in the room, either near or farther from the target, and at different echolocation rates, would be important for accurately assessing the observed correlated changes in neural activity.

To examine this issue, we took the pulse intervals of all SSG calls and partitioned them into low and high pulse-interval (PI) classes. We then computed the range tuning of neurons for echoes returning from SSGs produced at different call intervals and determined whether spatial responses were remapped or sharpened. We find no significant difference in range tuning between the low- and high-PI SSG groups. In other words, the remapping and sharpening observed for SSGs does not seem to result from higher echolocation rates, but rather from the bat's behavioral mode.
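A partition of this kind can be sketched as follows. The median split, the range bin edges and all names are our assumptions for illustration; the actual analysis may have used different bins and split points.

```python
import numpy as np

def split_by_pulse_interval(pulse_intervals, ranges, spikes):
    """Partition SSG echo events into low- vs high-PI halves (median split)
    and return a coarse range-tuning curve (spike probability per range bin)
    for each half.

    pulse_intervals : PI (ms) of the call producing each echo event
    ranges          : echo source range (m) for each event
    spikes          : 1/0 spike response for each event
    """
    pulse_intervals = np.asarray(pulse_intervals, float)
    ranges = np.asarray(ranges, float)
    spikes = np.asarray(spikes, float)
    median_pi = np.median(pulse_intervals)
    bins = np.linspace(0.0, 4.0, 9)  # 0-4 m in 0.5 m bins (assumed)
    curves = {}
    for label, mask in (("low_PI", pulse_intervals <= median_pi),
                        ("high_PI", pulse_intervals > median_pi)):
        counts, _ = np.histogram(ranges[mask], bins=bins)
        spike_sums, _ = np.histogram(ranges[mask], bins=bins,
                                     weights=spikes[mask])
        with np.errstate(invalid="ignore", divide="ignore"):
            curves[label] = np.where(counts > 0, spike_sums / counts, np.nan)
    return curves
```

Comparing the peak locations and widths of the two curves per neuron is then the basis for testing whether call rate, rather than behavioral mode, drives the remapping.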

Lastly, while the animal model and experimental preparation are certainly exciting, the paper in its current form provides neither revolutionary new insight nor a methodological innovation beyond what has already been reported in previous studies. Neural recordings have already been made in freely behaving and flying bats, and the dynamic encoding of spatial locations and attention by both single neurons and the LFP (gamma oscillations in this case) has already been reported in a wide range of species. Yet this paper does provide an important confirmation of previous hypotheses and data in the freely flying bat, and for that it does provide important insight.

In detail, the fact that the perceptual sensitivity of SC neurons changes in relation to spatial cues is not novel; it has already been shown in behaving animals (see, for example, Lovejoy and Krauzlis, 2017). Although in some non-human primate studies the head is restrained, the animals are performing active sensing with their freely moving eyes. The authors further argue that most studies have examined only 2D spatial cues, but three-dimensional tuning properties of SC neurons have also been shown previously in the same bat species by the same lab (Valentine and Moss, 1997), albeit not in freely flying bats. A dynamic modulation of neural activity with respect to the bat's echolocation signals has also been demonstrated previously, both during behavior in the same species (Ulanovsky and Moss, Hippocampus, 2011) and during flight in a different species (Geva et al., Nature Neuroscience, 2016), yet those recordings were made in the hippocampus, not in the SC as in the present study. Hence, the importance of this work is that it brings many of these components together in the superior colliculus of the flying bat. This is important because it allows long-standing hypotheses about the tuning properties of such neurons in echolocating bats to be addressed. It is important, however, that the authors properly and clearly delineate the statements of novelty in this study. As a side note: the authors further state in the Introduction that "past studies of the SC have largely focused on sensorimotor representation in restrained animals, leaving gaps in our knowledge about the influence of action and attention on sensory responses in freely moving animals". Yet this statement is also not entirely accurate. Studies in rodents have also made important contributions on the role of sensory input in modulating action and neural activity in the SC and should be acknowledged (see, for example, Felsen and Mainen, 2008: Neural substrates of sensory-guided locomotor decisions in the rat superior colliculus).

We reiterate below why our study is novel and warrants publication in eLife:

1) 3D sensory responses in a freely behaving bat to self-generated echoes from physical objects have never been previously demonstrated.

2) While reports on 3D place fields in the hippocampus of the Egyptian fruit bat have provided an important advance in neuroscience, the hippocampus is implicated in allocentric space and memory-based vectorial representation. Here we report on an entirely different finding: 3D egocentric sensory responses. Our results are the first to show that 3D sensory space is represented in the brain of an animal interacting with objects in its physical environment, and importantly, that sensory representations are modified by adaptive vocal-motor behaviors.

3) Changes in hippocampal place field tuning at varying time intervals following sonar emissions were reported for crawling bats (Ulanovsky and Moss, 2011); however, this result does not bear on the novelty of the findings reported in our manuscript for two reasons: 1) The dynamics of hippocampal place cell tuning following sonar emissions were hypothesized to relate to echo processing time, but this was never empirically demonstrated in the 2011 paper. No sonar echoes were recorded, analyzed or computed from spatial coordinates, and 2) The echolocation behavior of the crawling bat did not show the natural call intervals or sonar sound groups of the free-flying bats. In particular, the call intervals used to analyze the hippocampal place field tuning in the Ulanovsky and Moss, 2011 study were 0-78 ms, 78-210 ms, 210-540 ms, and >540 ms, to equate the number of spikes used to construct place fields over different time periods following sonar calls. It is important to note that these intervals are far greater than those produced by flying bats engaged in spatial navigation and provide no indication of the animal’s behavioral state. Indeed, the call intervals produced by the bats in our current study ranged between 8 and 80 ms, falling almost entirely within the shortest time window used in the hippocampal place field tuning analysis.

4) We assert that the remapping of 3D echo-evoked responses in the midbrain SC of the echolocating big brown bat is entirely new and stands apart from Geva et al.’s 2016 report of hippocampal place field remapping in Egyptian fruit bats exposed to different sensory environments (vision vs. echolocation). Not only do the data from the two studies come from recordings in different species and different brain structures, but the experiments also had different dependent and independent variables: Geva et al. showed remapping of hippocampal place fields in Egyptian fruit bats tested in two distinct environments, while we report here remapping of sensory response profiles in the SC of big brown bats that adapted their echolocation behavior to inspect objects within a single test environment. It is also worth noting that Egyptian fruit bats are highly visual (in contrast to the big brown bat), produce tongue clicks for echolocation rather than laryngeal calls, and do not dynamically modulate their sonar signal design when inspecting objects in their surroundings. Therefore, the sensory remapping results we report in this manuscript could not be obtained from the Egyptian fruit bat.

5) While past studies have implicated the SC in spatial attention and perception through local inactivation (Lovejoy and Krauzlis, 2017; McPeek and Keller, 2004), our work differs in that we recorded activity from single SC neurons in freely behaving animals and demonstrate remapping and sharpening of 3D spatial tuning with changes in the animal’s echolocation behavior.

6) The characterization of 3D response profiles in freely echolocating bats is an important breakthrough. For many decades, scientists have mimicked natural echolocation in restrained (often anesthetized) passively listening bats, without testing the validity of this approach. Would a vision scientist not consider it critical to compare neural responses to visual stimuli in paralyzed animals viewing moving patterns against those in a behaving animal moving its eyes to scan a stimulus? Here, for the first time, we present data demonstrating not only that auditory neurons in flying bats show 3D auditory spatial tuning, but also that 1) this tuning is sharper than reported in passively listening bats, and 2) it is modulated by the bat’s echolocation behavior.

7) We thank the reviewer for drawing our attention to the work of Felsen and Mainen. We now discuss their work in the manuscript (Introduction, fourth paragraph).

8) Importantly, we believe that our research can inspire colleagues to conduct related studies in other species, which would advance a more complete understanding of nervous system function in the context of real-world, natural behaviors.

[Editors' note: further revisions were requested prior to acceptance, as described below.]

The manuscript has been improved but there are some remaining issues that need to be addressed before acceptance, as outlined below, and with more details provided in the individual reviews. All of these points relate to the analysis and presentation of the data, and the reviewers agreed that the novelty and impact of this study will be clearer once this is done.

1) Quantification of the behavior (represented either by correlation of flight trajectories, heatmaps or a different method).

The question raised by the reviewers pertains to how much spatial memory guides the bats' flight behavior. In the newly revised manuscript, we report quantitative analyses of the amount of stereotypy in the bats’ flight behavior, using an approach similar to Barchi et al. (2013) and Falk et al. (2014). Specifically, we performed 2D spatial cross-correlations of occupancy histograms for every trial within a recording session. Using this technique, high correlation values indicate stereotyped flight paths from trial to trial, which have been interpreted as an indication of the bat’s use of spatial memory (Barchi et al., 2013). Conversely, low correlation values have been interpreted as an indication of active sensing rather than spatial memory (Falk et al., 2014).

In our study, bats were released from different locations in the room on each trial, and our analysis shows (Figure 1—figure supplement 1) low correlation values in flight paths across trials. Based on this finding, we argue that the bats in our study relied on sensory input, not spatial memory, to guide their flight trajectories. We believe that a correlation analysis of flight paths is more appropriate than a heatmap to demonstrate this point: a heatmap provides only coverage information and would therefore be insufficient to describe how stereotyped flights are from trial to trial as an indication of the use of spatial memory in navigation. In our experiment, coverage is important to the analysis of single-neuron spatial tuning, and for this analysis we show coverage across azimuth, elevation, and distance in Figure 3—figure supplement 1. This analysis has been included in the main manuscript (Results, first paragraph, and subsection “Analysis of flight behavior”).
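The occupancy-histogram correlation described above can be sketched as follows. We sketch only the zero-lag case, which is equivalent to a Pearson correlation of the flattened histograms; the bin size and function names are our assumptions, patterned after (not taken from) Barchi et al., 2013.

```python
import numpy as np

def flight_path_correlation(trial_a, trial_b, room=(6.0, 6.0), bin_m=0.5):
    """Correlate the 2D (x, y) occupancy histograms of two flight trials.

    trial_a, trial_b : (N, 2) arrays of tracked positions in meters
    room             : (x, y) extent of the flight room in meters
    Returns the Pearson correlation of the flattened histograms; values
    near 1 indicate stereotyped (repeated) flight paths."""
    xe = np.arange(0.0, room[0] + bin_m, bin_m)
    ye = np.arange(0.0, room[1] + bin_m, bin_m)
    ha, _, _ = np.histogram2d(trial_a[:, 0], trial_a[:, 1], bins=[xe, ye])
    hb, _, _ = np.histogram2d(trial_b[:, 0], trial_b[:, 1], bins=[xe, ye])
    return np.corrcoef(ha.ravel(), hb.ravel())[0, 1]
```

Computing this value for every pair of trials within a session yields the distribution whose low values are taken as evidence against memory-guided, stereotyped flight.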

2) Since there are data from only two bats, perhaps the authors could show the distribution of the main results across those two bats (as one would do for non-human primates which often have two subjects) to allow the readers to compare the responses.

We agree that it is important to show the separate results for the two bats in our study to allow the reader to assess the consistency of the results between animals. In the revised manuscript, we have color coded the summary data in Figures 4 and 5 (Bat 1 in green, Bat 2 in brown), and parsed the data in Figure 6 by individual bat.

3) Provide the standard measurement of unit-isolation quality by showing the distribution of Lratio and isolation distance for all of the analyzed neurons and exclude neurons which are clearly multi-unit. The concern is that biases can emerge from noisy analysis relying on very low spike counts. The authors should provide a measure of what is a reasonable threshold of number of spikes for a single neuron such that it can be safely included in the different analysis, or show that their results do not depend on spike counts.

We have now provided two different measures of unit-isolation quality in the revised manuscript. These are measures that are typically employed to quantify cluster separation of tetrode data: Lratio and isolation distance (Schmitzer-Torbert et al., 2005; Saleem et al., 2013). In prior studies, Lratio values less than 0.07 and isolation distances greater than 15 were used as criteria for well-separated clusters. In our data, all Lratio values were less than 0.05, and all isolation distances were greater than 15. These details have been added to the manuscript (Results, third paragraph; subsection “Surgical Procedure, neural recordings and spike sorting”, third paragraph; and Figure 1—figure supplement 3).

Reviewer #1:

The authors have been very responsive to my previous review. I accept their reasons for relying on their model for the timing of echo-evoked spikes.

With respect to neurons recorded, although data are still from only 2 bats, they have increased the number of neurons analyzed. The authors now report 182 single neurons recorded in the SC, with 67 being selective to pulse-echo pairs. The authors have answered questions about unit isolation with a citation to the Quiroga et al. 2004 paper, and also point out that data from both bats show similar results. More information should be provided about these units. The authors could provide data on their units to support divisions into sensory, motor, etc., including latency and rate, as well as spike sorting criteria.

In the manuscript, we provide the criteria used to classify units into sensory, sensorimotor and vocal-premotor units in the subsection “Classification of neurons into sensory, sensorimotor and vocal-premotor cells”. The mean response latency of the single sensory neurons we recorded was 5.9 ± 3.4 ms. In more detail, the minimum spike latency was 3 ms and the minimum s.d. of latency was 1 ms. The median s.d. of the response latencies for the 67 sensory neurons was 3.8 ms. Previous publications have reported a wide range of response latencies in SC neurons of the passively listening bat, as long as 40 ms, but also as short as 4 ms (Valentine and Moss, 1997), 3.6 ms (Jen et al., 1984) and 4 ms (Wong, 1984), and short-latency responses are likely mediated through a direct projection from the nucleus of the central acoustic tract to the SC (Casseday et al., 1989). These results have been included in the last paragraph of the subsection “Construction of spatial response profiles”.

In Author response image 1 the spike latencies are shown on the x-axis (ms) and the spike probabilities on the y-axis for typical examples of sensory neurons (panel A), vocal premotor neurons (panel B) and sensorimotor neurons (panel C). Note the negative latencies for the premotor neural activity. The blue dashed line indicates the onset of the echo (auditory stimulus) and the red dashed line indicates the onset of the vocalization. Gaussian fits of the latency data are shown in blue and red, for sensory and motor neurons, respectively.

In the revised manuscript we have also gone into more detail regarding our wavelet-based spike sorting method. This includes new analysis examining the Lratio and isolation distance of the spike sorting clusters. These are measures that are typically employed to quantify cluster separation of tetrode data (Schmitzer-Torbert et al., 2005; Saleem et al., 2013). In prior studies, Lratio values less than 0.07 and isolation distances greater than 15 were used as criteria for well-separated clusters. In our data, all Lratio values were less than 0.05, and all isolation distances were greater than 15. These details have been added to the manuscript (Results, third paragraph; subsection “Surgical Procedure, neural recordings and spike sorting”, third paragraph; and Figure 1—figure supplement 3).

In the revision, the authors provide more data on their SSG responses. It would be helpful to find out how the 26 units shown for the SSGs were classified. The only group of neurons that I can find with n=26 are the vocal premotor group. Are these the same units?

We have revised the manuscript (subsection “Adaptive sonar behavior modulates 3D spatial receptive fields”, last paragraph) to clarify how responses to SSGs were classified and the numbers of neurons in each category. We apologize for the confusion in our previous version.

Regarding the second question, we reported that 26 vocal premotor neurons were characterized (subsection “3D spatial tuning of single neurons in the SC of free flying bats”, second paragraph); however, these were different from the 26 neurons in the SSG/gamma power analysis.

For the gamma power (SSG vs. non-SSG) analysis, we analyzed all neurons that showed significant tuning in range (n = 56). Of these neurons, we were only able to perform the analysis on the 26 neurons (from 21 different channels) that were not affected by the bat’s wingbeat artifact. These details have been added to the text in the Materials and methods subsection “Local field potential”.

The other revisions greatly add to the paper, providing data on bat flight paths, and number of sessions in which responses were recorded.

Thank you!

Reviewer #3:

While I appreciate the response from the authors I still find the information provided lacking on the three main domains I described in my original review:

1) Behavioral data analysis and presentation: Still there is no detailed description and, importantly, quantification of the behavior. The authors say that bats were released from different locations and did not exhibit stereotyped behaviors but, as requested, they should show this. As requested previously, can the authors provide a quantitative assessment of the reproducibility (or lack thereof) of both the starting positions and flight trajectories. This can, for example, be done in the form of a heat-map illustrating the spatial distribution (in 3D space) of the flight trajectories. But I encourage the authors to provide a different assessment of this important point as well such that their argument for more variable flight trajectories and starting positions is better supported.

We appreciate this reviewer’s concern and have followed up with a quantification of the bats’ flight trajectories, which shows that the flight behavior in this study was not stereotyped. In the revised manuscript, we have provided a quantification of the trial-to-trial correlations in flight trajectories. As mentioned above, we chose to quantitatively measure the amount of stereotypy in the bats’ flight behavior, following a method used by Barchi et al. (2013) and Falk et al. (2014). In this analysis, we performed 2D spatial cross-correlations of occupancy histograms for every trial within a recording session. Using this technique, high correlation values indicate stereotyped flight paths from trial to trial, and these have been interpreted as an indication of the bat’s use of spatial memory (Barchi et al., 2013). Conversely, low correlation values have been interpreted as reflecting the use of active sensing rather than spatial memory (Falk et al., 2014).

The detailed methodology is as follows. Occupancy histograms were created by collapsing the 3D trajectory data to 2D plan projections (x–y and x–z). The number of points across a set of flight paths that fell inside 10 cm² bins was counted. These counts were converted to probabilities by dividing each bin count by the total number of points across each set of flights. After normalization, the occupancy histograms of trials could be compared within each session. The next step was to compute the autocorrelation of each trial and the cross-correlation of each trial with every other trial. The maximum value of each 2D cross-correlation was divided by the maximum value of the autocorrelation. This ratio is shown as a matrix for a representative session for both bats in Figure 1—figure supplement 1. The value of each square along the diagonal is one (yellow on the color bar), as it represents the autocorrelation of each flight trajectory. Cooler colors indicate minimal correlation between flight trajectories and warmer colors indicate stereotypy between trajectories. Further, we used 8 positions (a–h) for releasing Bat 1 and 6 positions (a–f) for releasing Bat 2. These are indicated on each plot to better allow evaluation of stereotypy when the bat was released from the same release point.
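For illustration, the occupancy-histogram and cross-correlation steps described above can be sketched in Python. The room dimensions, bin size, and function names below are our own assumptions for the sketch, not parameters taken from the study:

```python
import numpy as np
from scipy.signal import correlate2d

def occupancy_histogram(xy, room_size=(6.0, 6.0), bin_size=0.10):
    """2D occupancy histogram of one flight trajectory (plan projection).

    xy: (N, 2) array of positions in meters. Bin counts are normalized
    to probabilities by dividing by the total number of points.
    """
    edges = [np.arange(0.0, s + bin_size, bin_size) for s in room_size]
    hist, _, _ = np.histogram2d(xy[:, 0], xy[:, 1], bins=edges)
    return hist / hist.sum()

def stereotypy_ratio(hist_a, hist_b):
    """Peak of the 2D cross-correlation of two occupancy histograms,
    normalized by the peak of the autocorrelation of the first.

    Identical trajectories give 1; low values indicate little
    trial-to-trial stereotypy.
    """
    auto = correlate2d(hist_a, hist_a, mode='full').max()
    cross = correlate2d(hist_a, hist_b, mode='full').max()
    return cross / auto
```

Comparing every pair of trials in a session with `stereotypy_ratio` and arranging the results in a matrix would reproduce the kind of display described for Figure 1—figure supplement 1, with ones along the diagonal.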

In our analysis (Figure 1—figure supplement 1), we found very low correlations in flight paths across trials, and we argue that this result, together with the bats’ adaptive sonar behaviors, provides evidence that the bats did not rely on spatial memory to guide their flight. The question raised by the reviewer pertains to whether spatial memory may have been guiding the bat’s flight behavior, and we believe that a correlation analysis of flight paths is more appropriate than an occupancy heatmap to address this point: an occupancy heatmap only provides coverage information, and would therefore be insufficient to describe how stereotyped the flights are from trial to trial. In our experiment, coverage is important to the analysis of spatial tuning, and for this analysis we show coverage across azimuth, elevation, and distance in Figure 3—figure supplement 1. This analysis has been included in the main manuscript (Results, first paragraph and subsection “Analysis of flight behavior”).

Also, the information on reward contingency is lacking. How many trials in a session resulted in landing on a platform and how many did not? When the bats did not land on the platform were they also rewarded? Many more details of the behavior are required and at the moment are not provided by the authors. Again, this information is important for addressing the nature of the neural responses as postulated by the authors and for aligning their results with findings from other species (such as primates, rodents, etc.). This will allow the authors to engage a broader audience beyond the single species of bats.

In this study, we did not impose a ‘reward contingency’ upon the bats’ performance. We fed the bats mealworms at the end of each trial to keep them active, regardless of where they landed. Bat 1 landed on the platform (70% of trials) or elsewhere in the room (30% of trials), and was fed each time it landed. Bat 2’s task was simpler: it flew around our experimental test room and avoided crashing into hanging obstacles. When Bat 2 landed on the wall, which marked the end of a trial, it was fed. We have changed the text to emphasize that the animals were not conditionally rewarded for different behaviors, but merely fed during the course of the experiment (Results, first paragraph and in the subsection “Experimental design”).

2) Analysis of neural data: Despite the request in the previous round of reviews there is no quantification of the quality of the neural signal. As requested, the authors should provide some assessment of the quality of their neural data in the form of isolation indexes and Lratio. Again, such measures are fairly standard in neural analysis and would allow the readers to assess the data more properly. Also, the authors have now removed the threshold on the minimal number of action potentials for a neuron to be included in their analysis. This allowed them to increase the N of "valid neurons" without a quantification of the dependence of the sensitivity of the observed tuning curves on the number of action potentials included in the analysis. Furthermore, this was not requested by the reviewers and brings about concerns regarding conclusions being made based on a very low number of spikes and without support that this cannot bias the result. I encourage the authors to reinstate the threshold and, importantly, provide a quantitative threshold that would assure the results are not biased by low numbers of action potentials included in the analysis.

Quantification of neural signals and clustering:

We apologize for not including these points in earlier versions of our manuscript. In the revised manuscript, we have provided information on the Lratio and isolation distance for all wavelet clustered data. The clusters are well within the ranges used in prior work (Schmitzer-Torbert, et al., 2005; Saleem, et al., 2013) for significantly separated clusters (Lratio < 0.07, isolation distance > 15). We have added this information to the third paragraph of the Results and to the third paragraph of the subsection “Surgical Procedure, neural recordings and spike sorting”, and provided a figure for the Lratio analysis (Figure 1—figure supplement 3).
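For readers unfamiliar with these metrics, the standard definitions of Lratio and isolation distance (Schmitzer-Torbert et al., 2005) can be sketched as follows. The function and variable names are illustrative; this is not the authors' actual analysis code:

```python
import numpy as np
from scipy.stats import chi2

def cluster_quality(features, labels, cluster_id):
    """Lratio and isolation distance for one spike cluster.

    features: (n_spikes, n_features) waveform feature matrix
    labels:   (n_spikes,) cluster assignment per spike

    Lratio sums, over all spikes outside the cluster, the probability
    (under a chi-squared model) that a spike at that Mahalanobis
    distance belongs to the cluster, normalized by cluster size.
    Isolation distance is the squared Mahalanobis distance of the
    n-th closest outside spike, where n is the cluster size.
    """
    inside = labels == cluster_id
    X, Y = features[inside], features[~inside]
    mu = X.mean(axis=0)
    cov_inv = np.linalg.inv(np.cov(X, rowvar=False))
    d = Y - mu
    # Squared Mahalanobis distance of each outside spike to the cluster
    D2 = np.einsum('ij,jk,ik->i', d, cov_inv, d)
    df = X.shape[1]
    n = X.shape[0]
    l_ratio = np.sum(1.0 - chi2.cdf(D2, df)) / n
    iso_dist = np.sort(D2)[n - 1] if len(D2) >= n else np.inf
    return l_ratio, iso_dist
```

Under these definitions, a well-separated cluster yields a small Lratio (outside spikes are far from the cluster center) and a large isolation distance, consistent with the thresholds cited above (Lratio < 0.07, isolation distance > 15).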

Power analysis replaced the threshold criterion for the SSG vs. non-SSG analysis:

As the reviewer notes, we removed the threshold criterion, which was included in the first submission. However, we would like to clarify that the increase in the total number of units now included in our paper was due to the addition of more data sessions. New video analysis tools permitted flight trajectory reconstructions in some trials that were originally not analyzed due to poor video quality. In the original submission of the paper, which reported on the analysis of 20 range-tuned neurons, 17 showed a significant effect of echolocation behavior on range tuning. In the first revision of the manuscript, in which we added more neurons, we also removed the threshold criterion, and the number of units increased to 56. In response to the reviewer’s concern about removing the threshold criterion, we have now, in the second revision, adopted a more rigorous power analysis to determine which units should be included in our data set (see subsections “Adaptive sonar behavior modulates 3D spatial receptive fields” and “Power analysis of sample sizes for the SSG and non-SSG spatial tuning comparisons”), and we report on an n of 53 neurons in Figure 5E and an n of 51 in Figure 5F.

For the SSG and non-SSG analysis, we separated spiking activity according to whether the bat produced SSGs or non-SSGs. This resulted in some of the groups having a low spike count. To ensure that for each comparison, and for each neuron, we had enough statistical power to reliably analyze the data, we performed a permutation test. We only included the cells that passed the test at the p < 0.05 criterion level, which excluded 3/56 cells for Figure 5E and 5/56 cells for Figure 5F. We have included these additions in the manuscript (see the aforementioned subsections).
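A generic two-sample permutation test of the kind described above could be sketched as follows. The function name, the use of a difference in means as the test statistic, and the permutation count are our own illustrative assumptions; the study's actual test statistic and permutation scheme may differ:

```python
import numpy as np

def permutation_test(group_a, group_b, n_perm=10000, seed=0):
    """Two-sided permutation test for a difference in means between
    two spike-count groups (e.g., SSG vs. non-SSG trials).

    Returns the fraction of label permutations whose absolute mean
    difference is at least as large as the observed one.
    """
    rng = np.random.default_rng(seed)
    observed = abs(np.mean(group_a) - np.mean(group_b))
    pooled = np.concatenate([group_a, group_b])
    n_a = len(group_a)
    count = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)  # random reassignment of group labels
        diff = abs(pooled[:n_a].mean() - pooled[n_a:].mean())
        if diff >= observed:
            count += 1
    return count / n_perm
```

Because the null distribution is built by reshuffling the observed spike counts themselves, the test makes no normality assumption, which is why it is well suited to groups with low spike counts.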

As a further confirmation of the analysis (rank sum test) used to check whether the neurons showed a significant shift in range tuning (Figure 5F), we used the software G*Power (Faul et al., 2009) to estimate the statistical power, given the sample sizes and standard deviation of each SSG/non-SSG group, for each and every neuron. We would like to note that the power analysis for a non-parametric test (like the rank sum test) also requires an assumption of normality. We used the Lehmann technique (available as an option in G*Power) and checked our results at α < 0.05. We went a step further and identified all neurons for which both the SSG and non-SSG groups were normally distributed (we used the Anderson–Darling test as a test of normality). For these neurons, which passed the normality test, we estimated the power (using the Matlab command sampsizepwr). We would like to emphasize that both the G*Power analysis and the sampsizepwr analysis yielded similar results.

The above sets of analyses give us confidence that our results are not biased by low spike counts and are robust.

3) Novelty: The points raised by the authors are still in agreement with the fact that this paper very elegantly puts together pieces of data that have been previously reported elsewhere, either in bats or in other species and mostly serve as a verification of previous findings. Furthermore, some of the statements made by the authors are unclear. For example, when comparing to the work of Geva et al., the authors claim that the Egyptian bat does not dynamically modulate its sonar signal in response to its surroundings but the senior author of the paper is an author on a paper stating that it does (S. Danilovich, A. Krishnan, W. J. Lee, I. Borrisov, O. Eitan, G. Kosa, C. F. Moss, Y. Yovel (2015) Bats regulate biosonar based on the availability of visual information. Current Biology 25, 1107-1125), which is puzzling. Furthermore, other studies from Yosef Yovel and Nachum Ulanovsky have demonstrated that this species does in fact dynamically change the directionality of its sonar beam in response to the acoustic features of its environment, and the senior author of the current paper is also an author on that manuscript: Y. Yovel, B. Falk, C. F. Moss, N. Ulanovsky (2011) Active control of acoustic field-of-view in a biosonar system. PLoS Biology 9: e1001147.

The data reported in our manuscript go beyond putting together pieces of data that have been previously reported elsewhere, and we expand further on the novelty of our findings in the second revision (subsection “3D allocentric versus 3D egocentric representations in the brain”). We now recognize that some key information was not clearly conveyed in our previous response to the reviewer. 1) The Egyptian fruit bat produces tongue clicks, not laryngeal vocalizations, and therefore it cannot modulate the spectro-temporal features of its echolocation calls in response to echoes it gathers from the environment. As the reviewer points out, the Egyptian fruit bat has been shown to modulate the angular separation of the beam axes of sonar clicks in a pair (Yovel et al., 2011, PLoS Biology), and this finding simply illustrates that the echolocation behavior of the Egyptian fruit bat is not as primitive as previously believed. However, the Egyptian fruit bat cannot exhibit the rich dynamic sonar behavior of a laryngeal echolocator. 2) Further, none of the published studies from Nachum Ulanovsky’s lab have included quantitative analyses of the echolocation signals produced by the Egyptian fruit bat in the context of hippocampal activity. Indeed, Ulanovsky and his team have yet to consider whether the timing of Egyptian fruit bat echolocation signals influences hippocampal cells. Even their recent Science paper reporting on “social place cells” does not investigate modulation of hippocampal activity by the animal’s social calls (produced by the larynx).
In a 2011 paper published by Ulanovsky and Moss, place cell tuning of hippocampal neurons in the crawling big brown bat was analyzed with respect to the time elapsed following sonar emissions. However, the call intervals included in that analysis were far larger than those observed in free-flying bats inspecting their environment through echolocation, and quantitative analyses of the bat’s adaptive sonar behavior were not carried out in that prior study. We assert that our study of sensory-evoked neural activity in the midbrain superior colliculus of the free-flying laryngeal echolocating bat is entirely novel and shares no overlap with published work on the hippocampus of the free-flying Egyptian fruit bat.

In summary, while I am generally supportive of this important work I still feel that the authors should provide more detailed responses to the requests of the referees and frame their work better with respect to the vast knowledge on the neurophysiological properties of SC neurons across species. The latter would only benefit the authors as it will allow them to extend the impact and interest in their work to a broader audience, such as the readers of eLife.

We have made every effort to respond fully to your comments and suggestions, and we thank you for taking the time to help us improve our paper.

https://doi.org/10.7554/eLife.29053.028

Article and author information

Author details

  1. Ninad B Kothari

    Johns Hopkins University, Baltimore, United States
    Contribution
    Conceptualization, Data curation, Software, Formal analysis, Validation, Investigation, Visualization, Methodology, Writing—original draft, Writing—review and editing
    Contributed equally with
    Melville J Wohlgemuth
    Competing interests
    No competing interests declared
    ORCID iD: 0000-0001-5543-6459
  2. Melville J Wohlgemuth

    Johns Hopkins University, Baltimore, United States
    Contribution
    Conceptualization, Data curation, Validation, Investigation, Writing—original draft, Writing—review and editing
    Contributed equally with
    Ninad B Kothari
    Competing interests
    No competing interests declared
    ORCID iD: 0000-0003-4779-4154
  3. Cynthia F Moss

    Johns Hopkins University, Baltimore, United States
    Contribution
    Conceptualization, Resources, Supervision, Funding acquisition, Project administration, Writing—review and editing
    For correspondence
    cynthia.moss@jhu.edu
    Competing interests
    No competing interests declared
    ORCID iD: 0000-0001-6916-0000

Funding

National Science Foundation (IOS1460149)

  • Cynthia F Moss

Air Force Office of Scientific Research (FA9550-14-1-039)

  • Cynthia F Moss

Office of Naval Research (N00014-12-1-0339)

  • Cynthia F Moss

Office of Naval Research (N00014-17-1-2736)

  • Cynthia F Moss

The funders had no role in study design, data collection and interpretation, or the decision to submit the work for publication.

Acknowledgements

We would like to thank Drs. Nachum Ulanovsky, Yossi Yovel, Uwe Firzlaff, Lutz Wiegrebe and Shreesh Mysore for comments on the research presented in this manuscript. We also thank the members of the Johns Hopkins Comparative Neural Systems and Behavior lab (aka Bat Lab) for their valuable feedback on data analysis reported in this article, Dallas DeFord for help with data preprocessing, and James Garmon of Psychological and Brain Sciences, JHU for designing and fabricating various apparatus used during data collection, without which this experiment would not have been possible. This work was supported by the following research grants: NSF IOS1460149, AFOSR FA9550-14-1-039, ONR N00014-12-1-0339 and ONR MURI N00014-17-1-2736.

Ethics

Animal experimentation: All of the animals were handled according to approved institutional animal care and use committee (IACUC) protocols of the Johns Hopkins University. The experimental protocol (BA17A107) was approved (March 16, 2017) by the IACUC of the Johns Hopkins University. All surgery was performed under isoflurane anesthesia, and every effort was made to minimize suffering.

Reviewing Editor

  1. Catherine Emily Carr, University of Maryland, United States

Publication history

  1. Received: May 28, 2017
  2. Accepted: February 27, 2018
  3. Version of Record published: April 10, 2018 (version 1)

Copyright

© 2018, Kothari et al.

This article is distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use and redistribution provided that the original author and source are credited.


