A small, computationally flexible network produces the phenotypic diversity of song recognition in crickets
Abstract
How neural networks evolved to generate the diversity of species-specific communication signals is unknown. For receivers of the signals, one hypothesis is that novel recognition phenotypes arise from parameter variation in computationally flexible feature detection networks. We test this hypothesis in crickets, where males generate and females recognize the mating songs with a species-specific pulse pattern, by investigating whether the song recognition network in the cricket brain has the computational flexibility to recognize different temporal features. Using electrophysiological recordings from the network that recognizes crucial properties of the pulse pattern on the short timescale in the cricket Gryllus bimaculatus, we built a computational model that reproduces the neuronal and behavioral tuning of that species. An analysis of the model’s parameter space reveals that the network can provide all recognition phenotypes for pulse duration and pause known in crickets and even other insects. This phenotypic diversity arises from computations that likely evolved to increase energy efficiency and robustness of pattern recognition. The model’s parameter-to-phenotype mapping is degenerate – different network parameters can create similar changes in the phenotype – which likely supports evolutionary plasticity. Our study suggests that computationally flexible networks underlie the diverse pattern recognition phenotypes, and we reveal network properties that constrain and support behavioral diversity.
Editor's evaluation
Clemens et al. present a computational model of the cricket song recognition network, which they show is capable of reasonably reproducing neural activity and song selectivity in G. bimaculatus. They then explore the parameter space of this network and find that varying parameters of model cells enables it to produce a range of selectivities for the period, pulse duration, duty cycle, or pause duration of input song. They then identify the network parameters that most affect song selectivity and investigate the relationship between several subsets of parameters and song preference. This is a fascinating exploration of the computational flexibility of a small neural circuit; it is well researched and written and was enjoyable to read.
https://doi.org/10.7554/eLife.61475.sa0

Introduction
Many behaviors are driven by the recognition and evaluation of sensory stimuli. For instance, hunting requires the detection and tracking of prey; communication requires the recognition of the sounds, pheromones, or visual displays that serve as signals. The diversity of animal behavior across taxa betrays the capacity of sensory systems to evolve and adapt to reliably and specifically recognize a wide variety of signals. Neural evolution is well understood when behaviors are driven by signals recognized through the response specificity of primary afferent neurons, where the change in a single amino acid can change the tuning of a specific behavior (Prieto-Godino et al., 2017; Ramdya and Benton, 2010). However, many behaviors are driven by complex temporal and spatial signal patterns, whose recognition is based on the processing and comparison of neural activity across time and space, where changes in many parameters define the tuning of the system. For these behaviors, unraveling the underlying neural computation is challenging since it requires a mapping from circuit parameters to recognition phenotype.
One prominent behavior involving the recognition of complex temporal patterns is acoustic communication. Many animals – monkeys, mice, bats, birds, frogs, crickets, grasshoppers, katydids, fruit flies – produce species-specific songs to attract and woo conspecifics of the other sex (Baker et al., 2019; Bradbury and Vehrencamp, 2011; Kostarakos and Hedwig, 2014; Neunuebel et al., 2015; Schöneich et al., 2015). During the evolution of acoustic communication, the structure of songs as well as behavioral preferences can evolve rapidly during speciation events (Blankers et al., 2015; Mendelson and Shaw, 2005), giving rise to the large diversity of species-specific songs. Since the evolution of song is mainly driven by the female (Gray and Cade, 2000), the females’ song recognition must be selective and modifiable in order to drive the evolution of distinct, species-specific song patterns in males (Wagner, 2007). But how are these changes implemented at the level of the pattern recognition networks? While electrophysiological experiments can demonstrate the principles of their operations at a given time, they are limited in revealing the functional contribution of cellular and synaptic parameters in a network-wide systematic analysis. To overcome these limitations, the ability of biological networks to generate different recognition phenotypes can be investigated using computational modeling.
Here, we examine the computational capacity of the brain network that recognizes the pulse pattern in the Mediterranean field cricket, Gryllus bimaculatus. Cricket song consists of a sinusoidal carrier frequency, modulated in amplitude with temporal structure on short (<100 ms) and long (>100 ms) timescales (Figure 1A). On the short timescale, the song consists of trains of sound pulses with a species-specific pulse duration and pulse pause. On the long timescale, the pattern is more variable, and pulse trains are either continuous (trills) or grouped into chirps interrupted by a longer chirp pause. The pulse pattern on the short timescale – and the female’s behavioral preference for it – is compactly described in a two-dimensional parameter space spanned by pulse duration and pause (Figure 1B and C). The diverse song preferences have been extensively mapped in more than 20 species (e.g., Bailey et al., 2017; Cros and Hedwig, 2014; Gray et al., 2016; Hennig, 2003; Hennig, 2009; Hennig et al., 2016; Rothbart and Hennig, 2012). This revealed three principal types of preference, defined by selectivity for specific features of the pulse pattern (Figure 1B): pulse duration, pulse period (duration plus pause), and pulse duty cycle (duration divided by period, corresponds to signal energy) (Hennig et al., 2014). Intermediates between these types are not known. A fourth type of selectivity – for pulse pause – has not been reported in crickets and is only known from katydids (Schul, 1998).
Repetitive patterns of short pulses that are organized in groups on a longer timescale are a common feature of acoustic signaling in insects, fish, and frogs (Baker et al., 2019; Carlson and Gallant, 2013; Gerhardt and Huber, 2002), and the processing and evaluation of these pulse patterns is therefore common to song recognition systems across species. Moreover, circuits analyzing temporal patterns of amplitude modulations are likely building blocks for recognizing the more complex acoustic communication signals found in vertebrates like songbirds or mammals (Aubie et al., 2012; Coffey et al., 2019; Comins and Gentner, 2014; Gentner, 2008; Neunuebel et al., 2015) including human language (Oganian and Chang, 2019; Neophytou and Oviedo, 2020). Insights from insects where assumptions on physiologically relevant parameters like synaptic strengths, delays, and membrane properties of individual neurons can be made and systematically tested are therefore relevant for studies of pattern recognition systems and the evolution of acoustic communication systems in general.
While little is known about the neural substrates that recognize song on the long timescale of chirps, the neuronal circuit that computes the behavioral preference for the pulse pattern on the short timescale has been revealed in the cricket Gryllus bimaculatus (Kostarakos and Hedwig, 2012; Schöneich et al., 2015). In this species, the selectivity for a narrow range of pulse periods is created in a network of five neurons and six synaptic connections by combining a delay line with a coincidence detector (Figure 1D). The ascending auditory neuron 1 (AN1) is tuned to the carrier frequency of the male calling song and provides the input to a small, four-cell network in the cricket brain. Driven by AN1, the local neuron 2 (LN2) inhibits the non-spiking neuron LN5, which functions as a delay line and produces a post-inhibitory rebound depolarization driven by the end of each sound pulse. The coincidence detector neuron LN3 receives direct excitatory input from AN1 and a delayed excitatory input driven by the rebound of LN5; it fires strongly only if the rebound from LN5 coincides with the onset of the AN1 response to the next pulse. Lastly, the feature detector neuron LN4 receives excitatory input from LN3 and inhibitory input from LN2, which sharpens its selectivity by further reducing responses to pulse patterns that do not produce coincident inputs to LN3. LN4’s selectivity for pulse patterns closely matches the phonotactic behavior of the females (Figure 1E).
We here asked whether the network that recognizes features of the pulse pattern on the short timescale in G. bimaculatus (Figure 1D) has the capacity to produce the diversity of recognition phenotypes for pulse duration and pause known from crickets and other insects (Figure 1B), and what circuit properties support and constrain this capacity. Based on electrophysiological recordings (Kostarakos and Hedwig, 2012; Schöneich et al., 2015), we fitted a computational model that reproduces the response dynamics and the tuning of the neurons in the network. By exploring the network properties over a wide range of physiological parameters, we show that the network of G. bimaculatus can be modified to produce all types of preference functions for pulse duration and pause known from crickets and other insect species. The phenotypic diversity generated by the network is shaped by two computations – adaptation and inhibition – that reduce responses and point to fundamental properties of neuronal networks underlying temporal pattern recognition.
Results
A computational model of the song recognition network in G. bimaculatus
We tested whether the delay line and coincidence detector network of the cricket G. bimaculatus (Figure 1D) can be modified to produce the known diversity of preference functions for pulse duration and pause in cricket calling songs (Figure 1B). This network was previously inferred from the anatomical overlap together with the dynamics and the timing of responses of individually recorded neurons to a diverse set of pulse patterns (Kostarakos and Hedwig, 2012; Schöneich et al., 2015). Because electrophysiology is challenging in this system, dual-electrode recordings that would prove the existence of the inferred connections do not yet exist. We consider the neurons in the network to be cell types that may comprise multiple cells per hemisphere, with highly consistent properties across individuals (Schöneich, 2020). We fitted a computational model based on intracellularly recorded responses of the network’s neurons to pulse trains. Our goal was to obtain a model that captures the computational capacity of the network without tying it to a specific biophysical implementation, and we reproduced the responses of individual neurons using a phenomenological model based on four elementary computations (Figure 2A): (1) filtering, (2) nonlinear transfer functions (nonlinearities), (3) adaptation, and (4) linear transmission with a delay. Nevertheless, all model components have straightforward biophysical correlates (see Discussion), which allows us to propose biophysical parameters that tune the network in specific implementations. The computational steps – for instance, whether a neuron had an integrating or a differentiating filter (Figure 2—figure supplement 2) – were selected such that each neuron’s response dynamics could be reproduced (Figure 2A, Figure 2—figure supplement 2, Table 1). The model parameters were first manually tuned to initial values that reproduced the key properties of each neuron’s response and then numerically optimized to more precisely fit each neuron’s response dynamics and tuning (Figure 2B–H). To simplify fitting, we exploited the feed-forward nature of the network: we first optimized the parameters of the input neuron to the network, AN1. We then moved downstream through the network, fitting the parameters of each neuron while keeping all parameters of its upstream partners fixed. Electrophysiological data used for fitting were the time-varying firing rate for the spiking neurons AN1, LN2, LN3, and LN4, and the membrane voltage for the non-spiking LN5 neuron, all in response to periodic pulse trains with different pulse durations and pauses (Kostarakos and Hedwig, 2012; Schöneich et al., 2015). The model parameters, the data used for fitting, and the fitting procedure are described in detail in Materials and methods, Figure 2—figure supplement 1, Figure 2—figure supplement 2, and Table 1.
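To make the structure of such a phenomenological model concrete, the sketch below chains the four elementary computations for a single model neuron in Python. All filter shapes, time constants, gains, and the sigmoid are illustrative placeholders and not the fitted parameters of the G. bimaculatus model.

# Minimal sketch of one model neuron as a cascade of the four elementary
# computations (filtering, nonlinearity, adaptation, delayed transmission).
# All parameter values are illustrative placeholders, not fitted values.
import numpy as np

DT = 1.0  # integration step in ms

def exp_filter(tau_ms, gain, dt=DT, length_ms=100):
    """Single exponential filter lobe; a negative gain yields an inhibitory lobe."""
    t = np.arange(0, length_ms, dt)
    k = np.exp(-t / tau_ms)
    return gain * k / k.sum()

def model_neuron(stim, exc_tau=5.0, inh_tau=40.0, inh_gain=0.5,
                 thresh=0.05, slope=30.0, adapt_tau=80.0, adapt_strength=2.0,
                 delay_ms=5.0, dt=DT):
    # 1) filtering: fast excitatory lobe plus a slower inhibitory lobe (differentiating)
    k = exp_filter(exc_tau, 1.0) - exp_filter(inh_tau, inh_gain)
    drive = np.convolve(stim, k)[: len(stim)]
    # 2) nonlinearity: sigmoidal transfer from filtered drive to rate
    rate = 1.0 / (1.0 + np.exp(-slope * (drive - thresh)))
    # 3) adaptation: divisive normalization by a low-pass filtered copy of the rate
    adapt = np.zeros_like(rate)
    out = np.zeros_like(rate)
    for i in range(1, len(rate)):
        adapt[i] = adapt[i - 1] + dt / adapt_tau * (rate[i - 1] - adapt[i - 1])
        out[i] = rate[i] / (1.0 + adapt_strength * adapt[i])
    # 4) linear transmission with a delay to the downstream neuron
    shift = int(delay_ms / dt)
    return np.concatenate([np.zeros(shift), out])[: len(stim)]

# usage: response to a pulse train with 20 ms pulses and 20 ms pauses
stim = np.tile(np.r_[np.ones(20), np.zeros(20)], 10)
resp = model_neuron(stim)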
The model faithfully reproduces the neural responses
The fitted model closely reproduced the responses of the network neurons to stimuli from the electrophysiological data set (Figure 2B–H). To quantify model performance, we assessed the match in the dynamics and in the tuning between the neuronal and the model responses. First, we computed the squared correlation coefficient (r²) between the recorded and the modeled responses (Figure 2G, Figure 2—figure supplement 3). We performed this correlation analysis at different timescales of the traces by low-pass filtering responses and predictions with filters of different durations. At shorter timescales, the measure is sensitive to the precise response timing, whereas at longer timescales it reflects the match in the coarse features of the firing rate or voltage dynamics. The r² value is high across all timescales for the model neurons (indexed with ‘M’) AN1M, LN2M, LN3M, and LN5M, which respond to a pulse in the biological network with multiple spikes or sustained membrane voltage deflections. By contrast, LN4 produces only a few and irregularly timed spikes during a chirp, and r² is therefore highest for timescales exceeding the duration of a typical pulse (15 ms) (Figure 2—figure supplement 3). Second, we calculated the match between the tuning curves derived from the experimental data and the model (Figure 2H). The model excellently reproduced the tuning curves of AN1M, LN2M, LN3M, and LN4M. Performance is lower for LN5M since the model produced overly strong rebound responses for patterns with long pulse periods and pauses (Figure 2B and D). In the electrophysiological data, the rebound amplitude is also variable across individuals. This may reflect genuine interindividual variability, but it could also be an experimental artifact due to the challenges of recording from the tiny branches of this very small neuron. Despite this variability in LN5, the tuning of the downstream neurons LN3 and LN4 is not variable (Figure 2B–D). This indicates that the biological network is robust to small changes in rebound amplitude and that it primarily relies on rebound timing. Our model reproduces this robustness (Figure 2E–G): the response dynamics and tuning of the downstream neurons LN3M and LN4M are captured well despite the discrepancy in LN5M rebound amplitude. Moreover, we find that altering rebound amplitude within the range of the discrepancy only weakly affects model output (Figure 2—figure supplement 4). We also cannot exclude that there is a population of multiple LN5-type neurons in each hemisphere and that variability between the individual LN5 neurons reflected in our recordings is averaged in their summed input to LN3. Overall, this shows that despite this small discrepancy between the data and the model, our model captures the computations of the biological network well.
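The timescale-dependent comparison described above can be sketched as follows: both the recorded and the modeled trace are low-pass filtered at several window lengths before computing the squared Pearson correlation. The boxcar filter and the window lengths are assumptions for illustration; the analysis in the paper may use a different filter shape.

# Sketch of the timescale-dependent r^2 between recorded and modeled responses.
import numpy as np

def r_squared_by_timescale(recorded, modeled, windows_ms=(2, 5, 15, 50, 100), dt_ms=1.0):
    scores = {}
    for w in windows_ms:
        n = max(1, int(w / dt_ms))
        box = np.ones(n) / n                      # boxcar low-pass filter of duration w
        rec = np.convolve(recorded, box, mode="same")
        mod = np.convolve(modeled, box, mode="same")
        r = np.corrcoef(rec, mod)[0, 1]
        scores[w] = r ** 2                        # squared correlation at this timescale
    return scores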
To further assess the model’s performance, we examined each model neuron’s responses over a wide range of pulse and pause durations that covered the range of song parameters found across cricket species (Weissman and Gray, 2019). There exist no electrophysiological data for such a wide range of stimuli, but the behavioral data from G. bimaculatus indicate that the neural responses should change smoothly with the song parameters (Grobe et al., 2012; Hennig et al., 2014; Kostarakos and Hedwig, 2012). The responses of all neurons in the model – presented as two-dimensional response fields that depict the response rate for each combination of pulse duration and pause in the set of test stimuli – are consistent with this prediction (Figure 3A). Discontinuities in the responses as a function of the stimulus parameters stem from the discrete nature of the stimulus, because the number of pulses per train changes with pulse duration and pause (Figure 3—figure supplement 1A and B). The response fields illustrate the gradual transformation of tuning in the network: LN2M at the beginning of the network responds best to stimuli with large duty cycles, that is, stimuli with long pulse durations and short pauses. Following the network from LN5M over LN3M to LN4M, the responses to large duty cycle stimuli attenuate and the pulse period tuning becomes more and more prominent, with LN4M ultimately being selective for a narrow range of pulse periods.
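A response field of this kind can be computed by driving a model neuron with pulse trains for every combination of pulse duration and pause and averaging its output over the chirp. The sketch below does this for the placeholder model_neuron from the sketch above; the grid spacing and chirp duration are arbitrary choices, not those used in the paper.

# Sketch of a response field over pulse duration x pause (reuses the
# placeholder model_neuron defined in the earlier sketch).
import numpy as np

def pulse_train(duration_ms, pause_ms, chirp_ms=300, dt_ms=1.0):
    period = int((duration_ms + pause_ms) / dt_ms)
    one = np.r_[np.ones(int(duration_ms / dt_ms)), np.zeros(int(pause_ms / dt_ms))]
    n_periods = int(np.ceil(chirp_ms / dt_ms / max(period, 1)))
    return np.tile(one, n_periods)[: int(chirp_ms / dt_ms)]

def response_field(neuron_fn, durations=range(1, 81, 2), pauses=range(1, 81, 2)):
    field = np.zeros((len(durations), len(pauses)))
    for i, dur in enumerate(durations):
        for j, pau in enumerate(pauses):
            field[i, j] = neuron_fn(pulse_train(dur, pau)).mean()  # rate averaged over the chirp
    return field

# usage: preferred duration and pause are the location of the field maximum
field = response_field(model_neuron)
best_dur_idx, best_pause_idx = np.unravel_index(field.argmax(), field.shape)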
The song of G. bimaculatus is produced with a chirp pattern (Figure 1A), and female preference for it is broad relative to the tuning for pulse duration and pause (Grobe et al., 2012). Consistent with this, the model’s response field is robust to small changes in chirp duration – like adding or removing a single pulse from a chirp – that are typical for the natural song of this species (Figure 3—figure supplement 2). This confirms that our results on pulse duration and pause are robust to the small variations on the longer timescale of chirps observed in natural song. Future studies will examine to what extent the network that recognizes song on the short timescale of pulses also contributes to female preference on the longer timescale of chirps.
In summary, our model reproduces the characteristic response features of each neuron type in the biological network. Using this model of the song recognition mechanism in G. bimaculatus, we can now test whether the network has the capacity to produce the behavioral preferences for pulse duration and pause known from other cricket species and identify the parameters that determine the network’s preference.
The network can be tuned to produce all known preferences for pulse duration and pause in crickets
To determine the diversity of behavioral preferences for pulse duration and pause that the network can produce, we created different model variants by altering all model parameters – for instance, the weight and delay of inputs or the amplitude or duration of filters. The model variants were generated by randomly picking values for each of the 45 parameters from an interval around the parameter values of the fit to G. bimaculatus (see Materials and methods for details and Table 1). Biophysical parameters of a neuron type can vary 10-fold even within a species (Goaillard et al., 2009; Schulz et al., 2007), and we therefore chose an interval of 1/10 to 10 times the values from the fit to G. bimaculatus. Initial experiments with a wider range (1/100 to 100-fold) yielded qualitatively similar results but with a larger fraction of untuned or unresponsive models. Delay parameters were randomly picked from an interval between 1 and 21 ms. These parameters correspond to the delay added to a neuron’s inputs and were optimized during fitting to match the timing of the responses of the neuron’s outputs. They therefore account not only for axonal conduction and synaptic transmission delays but also for delays arising from low-pass filtering or integration of inputs to the spiking threshold (Creutzig et al., 2010; Zhou et al., 2019), justifying the extended range of values. To translate responses of the output neuron of the network – LN4M – into phonotaxis, we used a simple model: the firing rate of LN4 is strongly correlated with female phonotaxis in G. bimaculatus (Figure 1E, Schöneich et al., 2015), and we therefore took LN4M’s firing rate averaged over a chirp to predict phonotaxis from the model responses. Integrative processes over timescales exceeding the chirp are known to affect behavior in crickets and other insects (Poulet and Hedwig, 2005, see also Meckenhäuser et al., 2014; Clemens et al., 2014; DasGupta et al., 2014). We omit them here since they do not crucially affect responses for the simple, repetitive stimuli typical of the pulse trains produced by crickets. The preference properties of the network models with randomized parameter sets were characterized for a two-dimensional stimulus space using pulse trains with all combinations of pulse durations and pauses between 1 and 80 ms (Figure 4A). We generated 5 million model variants; 9% of these were responsive and selective and were used for all further analyses. This low proportion arises because many parameter combinations produce constant output, for instance, if the firing threshold in AN1 is too low or too high.
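The random sampling of model variants can be sketched as follows: each parameter is drawn log-uniformly from 1/10 to 10 times its fitted value, while delay parameters are drawn uniformly between 1 and 21 ms. The parameter names in the example are hypothetical placeholders, not the actual names of the 45 model parameters.

# Sketch of sampling one model variant around the fitted G. bimaculatus parameters.
import numpy as np

rng = np.random.default_rng(0)

def sample_variant(fitted_params, delay_keys):
    variant = {}
    for name, value in fitted_params.items():
        if name in delay_keys:
            variant[name] = rng.uniform(1.0, 21.0)                 # delays in ms
        else:
            variant[name] = value * 10 ** rng.uniform(-1.0, 1.0)   # 1/10 to 10x the fitted value
    return variant

# usage with two hypothetical parameters
fitted = {"LN5_rebound_gain": 0.8, "AN1_to_LN3_delay": 6.0}
variant = sample_variant(fitted, delay_keys={"AN1_to_LN3_delay"})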
As a first step towards characterizing the types of tuning the network can produce, we assessed the preferred pulse duration and pause for each of the 450,000 selective model variants (Figure 4A). We find that preferences cover the full range of pulse and pause combinations tested (Figure 4B). However, the model variants do not cover the preference space uniformly but are biased to prefer patterns with short pulse durations, short periods, and low duty cycles (Figure 4C). Peaks at pauses of 0 ms arise from duty cycle-tuned models with a preference for unmodulated stimuli, and peaks at pauses of 80 ms arise from models preferring pauses beyond the range tested here. In conclusion, the network can produce diverse recognition phenotypes, but this diversity is biased towards specific stimulus patterns.
The preferred pulse parameters – duration, pause, and their combinations period and duty cycle – only incompletely describe a network’s recognition phenotype. In the next step, we described the response fields more exhaustively, focusing on aspects that have been well characterized in behavioral analyses. This allowed us to assess the match between the diversity of response fields in the model and the known biological diversity. Behavioral analyses in crickets and other insects (Deutsch et al., 2019; Hennig et al., 2014; Schul, 1998) typically find oriented response fields with a single peak in the two-dimensional parameter space spanned by pulse duration and pause (Figure 4A). The vast majority of these fields have an elongated major axis defining stimulus parameters the female is most tolerant for, and a shorter minor axis defining parameters the female is most selective for. Multi-peaked response fields have been associated with a resonant recognition mechanism and have so far only been reported in katydids (Bush and Schul, 2005; Webb et al., 2007), not in crickets. The orientations of response fields measured in more than 20 cricket species cluster around four angles (Figure 1B), forming four principal types of tuning. Intermediate types of phonotactic tuning may exist but have not been described yet. Specifically, duration tuning is defined as selectivity for pulse duration and tolerance for pause (Figure 1C, lilac) (Teleogryllus commodus, Gray et al., 2016; Hennig, 2003, see also Deutsch et al., 2019). This corresponds to the response field’s major axis being parallel to the pause axis (defined as an orientation of 0°, see inset in Figure 4D). By contrast, pause tuning (Figure 1C, green) corresponds to an orientation of 90°, with the response field’s major axis extending parallel to the pulse duration axis. This type of tuning is not known in crickets, only in katydids (Schul, 1998). Pulse period and duty cycle tuning correspond to response fields with anti-diagonal and diagonal orientations, respectively. Period tuning (Figure 1C, purple) is given by an anti-diagonal orientation (–45°), indicating selectivity for pulse period and tolerance for duty cycle (G. bimaculatus, Hennig, 2003; Hennig, 2009; Rothbart and Hennig, 2012). Last, duty cycle tuning (Figure 1C, cyan) is given by a diagonal orientation (45°) and selectivity for duty cycle but tolerance for period (Gryllus lineaticeps, Hennig et al., 2016).
We first examined to what extent the model produced the single-peaked, asymmetrical response fields typical for crickets. We find that most response fields (80%) produced by the selective model variants were well described by a single ellipse (Figure 4—figure supplement 2A and B, see Materials and methods for details). Of these, 83% were asymmetrical (major axis >1.25× longer than minor axis) and 17% were symmetrical (Figure 4—figure supplement 2C). 12% of all models produced multi-peaked response fields (Figure 4—figure supplement 2D), which are only known from katydids (Webb et al., 2007). The remaining 8% of the response fields were not well described by ellipses and/or did not have multiple distinct peaks. Thus, while the model produces more diverse responses – including complex, multi-peaked ones – most responses do match those typical for crickets.
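The orientation and asymmetry of a single-peaked response field can be approximated from the weighted second moments of the supra-threshold part of the field, as in the sketch below. This is an illustrative approximation under assumed conventions (pause on the horizontal axis, 0° parallel to the pause axis); the ellipse-fitting procedure used in the paper may differ (see Materials and methods).

# Sketch: orientation and major/minor axis ratio of a response field from
# response-weighted principal axes of the supra-threshold region.
import numpy as np

def field_orientation(field, durations, pauses):
    dur_grid, pause_grid = np.meshgrid(durations, pauses, indexing="ij")
    w = np.where(field >= 0.5 * field.max(), field, 0.0).ravel()    # keep the peak region
    coords = np.stack([pause_grid.ravel(), dur_grid.ravel()])       # x = pause, y = duration
    mean = (coords * w).sum(axis=1) / w.sum()
    centered = (coords - mean[:, None]) * np.sqrt(w)
    cov = centered @ centered.T / w.sum()                           # weighted covariance
    evals, evecs = np.linalg.eigh(cov)                              # ascending eigenvalues
    major = evecs[:, -1]
    orientation = np.degrees(np.arctan2(major[1], major[0]))        # 0 deg = parallel to pause axis
    if orientation > 90:
        orientation -= 180                                          # fold into (-90, 90]
    if orientation <= -90:
        orientation += 180
    asymmetry = np.sqrt(evals[-1] / max(evals[0], 1e-12))           # major/minor axis ratio
    return orientation, asymmetry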
We next assessed the orientation of the single-peaked, asymmetrical response fields to test to what extent they fall into four principal types (Figure 4D). We find response fields with any orientation, again demonstrating that the network can produce more diverse response fields than has been reported in crickets. However, the orientations are unevenly distributed and are enriched for the principal types known from crickets: 36% of the response fields have an orientation of 0 ± 10°, which corresponds to duration tuning (expectation from uniform distribution: 20°/360° = 5.6%). Duty cycle tuning (45 ± 10°) and period tuning (–45 ± 10°) are also enriched, with 17% and 12%, respectively. Notably, pause tuning (90 ± 10°) is not known in crickets and is the only principal type that is rarer than expected from a uniform distribution of orientations (2.0% vs. 5.6% expected). The rarity of pause tuning is consistent with the bias to prefer short pulse durations observed above (Figure 4C) since orientations around 90° require response fields that extend parallel to the duration axis. Note that these trends do not depend critically on the ranges of angles chosen for specifying the different response types.
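Given an orientation, the assignment to one of the four principal types with the ±10° windows used above reduces to a simple lookup; orientations outside all windows are labeled intermediate. This is a sketch of the classification logic only, with the angle conventions assumed from Figure 4D.

# Sketch of classifying a response-field orientation into the principal preference types.
def preference_type(orientation_deg, window=10.0):
    centers = {"duration": 0.0, "duty cycle": 45.0, "pause": 90.0, "period": -45.0}
    for label, center in centers.items():
        # angular distance modulo 180 deg, since ellipse orientations wrap around
        d = abs((orientation_deg - center + 90.0) % 180.0 - 90.0)
        if d <= window:
            return label
    return "intermediate"

# usage
assert preference_type(-42.0) == "period"
assert preference_type(25.0) == "intermediate"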
Overall, the response fields generated by the model are roughly consistent with the behavioral diversity in crickets: most response fields form a single, elongated ellipse, similar to the behaviorally measured response fields. Duration, duty cycle, and period tuning are frequent in the models and in crickets, pause tuning is rare in the models and absent in crickets. Interestingly, the network tends to create a larger diversity of response fields than is known from crickets, for instance, fields that are symmetrical, multi-peaked, or have intermediate orientations. This suggests that biases in the network – like the rarity of pause tuning – constrain the distribution of preferences that evolution can select from, and that additional factors – like robustness to noise or temperature – then determine the ultimate distribution of phenotypes. Our analysis of different model variants suggests that this song recognition network can produce all known preference types for pulse duration and pause over the range of stimulus parameters relevant for crickets. This phenotypic flexibility implies that the network may form the basis for the diversity of song recognition phenotypes. We therefore sought to identify model parameters that support that diversity, that is, parameters that change the preference for pulse period or that switch the preference from one type to another. We also looked for parameters that constrain the diversity, for instance, parameters that induce a bias towards low duty cycles (Figure 4B–D).
Post-inhibitory rebound properties and coincidence timing are key parameters that shape preferences
To determine key parameters that control the network tuning and to identify the computational steps that induce the preference bias, we systematically examined the effect of changing individual model parameters on the response fields. We swept each parameter individually in 21 log-spaced steps over an interval of 1/100 to 100-fold around the value from the original fit (Figure 4F). We then calculated a sensitivity score for each parameter as the average change in the response field of the network’s output neuron, LN4M, over the parameter sweep (see Materials and methods). Parameters that, when changed, produced mostly unselective or unresponsive models were excluded from subsequent analyses, as were parameters that induced only one or two sudden changes in the response fields. For instance, parameters that control the firing threshold of AN1 were excluded because they turn the input to the network on or off – this produces a large, step-like change in the response field and many unresponsive models. Our sensitivity analysis thereby focuses on parameters suitable for controlling the network’s tuning, that is, parameters whose change induces smooth shifts in the model responses while retaining responsiveness and selectivity.
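The single-parameter sweep can be sketched as follows: each parameter is scaled by 21 log-spaced factors between 1/100 and 100, the LN4M response field is recomputed, and the sensitivity score is the mean change relative to the unperturbed field. Here, run_network and the normalization are hypothetical stand-ins for the actual model pipeline and scoring used in the paper.

# Sketch of the single-parameter sensitivity analysis.
import numpy as np

def sensitivity_score(fitted_params, name, run_network):
    base_field = run_network(fitted_params)            # LN4M response field of the original fit
    factors = np.logspace(-2, 2, 21)                   # 1/100 to 100x in 21 log-spaced steps
    changes = []
    for f in factors:
        params = dict(fitted_params)
        params[name] = fitted_params[name] * f
        field = run_network(params)
        if field is None:                              # unresponsive or unselective variant
            continue
        diff = np.abs(field - base_field).mean() / (np.abs(base_field).mean() + 1e-12)
        changes.append(diff)
    return np.mean(changes) if changes else np.nan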
The topology of the pattern recognition network is defined by five neurons (Figure 1D). As a first step, we sought to evaluate the importance of each neuron for controlling the network tuning by averaging the sensitivity scores for the parameters of each neuron (Figure 4G). This revealed that the network tuning can be least controlled through the parameters of AN1M, the input neuron of the network, and best controlled through the parameters of the non-spiking neuron LN5M, which generates the delayed post-inhibitory rebound. AN1M is unsuitable for control because changes in most parameters of AN1M will not induce a gradual change in tuning but will quickly produce no or unselective responses in the network. The importance of LN5M is consistent with the idea that the rebound and coincidence detection form the core computations of the network (Schöneich et al., 2015) since the dynamics of the rebound response directly influence what stimulus features result in simultaneous inputs to the coincidence-detector neuron LN3M.
To identify important parameters for tuning the network, we ranked them by their sensitivity score (Figure 4H). In line with the above analysis for each neuron, the top-ranked parameters directly affect the timing and the amplitude of inputs to LN3M. Among these are the delays and gains for the connections upstream of LN3M (AN1M→LN3M, LN2M→LN5M, LN5M→LN3M) but also the gain of the excitatory lobe of the LN2 filter (see Figure 2—figure supplement 1, Figure 2—figure supplement 2). Another group of important parameters affects the dynamics and amplitude of the rebound response in LN5M: first, the duration of the ‘postsynaptic’ filter of LN5M (Figure 2—figure supplement 2), which is required to reproduce the adapting and saturating dynamics of the inputs to LN5M, visible as negative voltage components in recordings of LN5 (Schöneich et al., 2015). Second, the gain of the inhibitory and the duration of the excitatory lobe of the rebound filter that produces the post-inhibitory rebound (Figure 2—figure supplement 1C, Figure 2—figure supplement 2). A modified sensitivity analysis, in which we changed combinations of two parameters at a time, produced a similar parameter ranking, confirming the robustness of these results (Figure 4—figure supplement 1).
Our sensitivity analysis revealed key parameters that change the tuning of the network, but did not address their specific effect, for instance, on the preference for specific pulse durations or periods (Figure 4E). This is however crucial for understanding which network parameters need to be modified to produce a specific phenotype and where the bias for stimuli with low duty cycles (short pulses and long pauses) arises (Figure 4B–D). We therefore examined the specific effects of some of the top-ranked parameters (Figure 4H) on the tuning of the network.
Relative timing of inputs to the coincidence detector controls pulse period preference
Four of the important parameters identified in our sensitivity analysis affect the timing of inputs to the coincidence-detector neuron LN3. Three of these parameters are the delays of the AN1M→LN3M, the LN2M→LN5M, and the LN5M→LN3M connections. The fourth parameter – the duration of the filter that shapes input adaptation in LN5M – also affects the input delays to LN3M (Figure 5—figure supplement 1). The delay between the spikes from AN1M and the rebound from LN5M in LN3M strongly affects network tuning since it determines which pulse train parameters produce coincident inputs required for driving spikes in LN3M (Figure 5A). Increasing this delay – for instance by delaying the rebound from LN5M – increases the preferred pulse period in LN3M (Figure 5B). This delay was hypothesized to be the core parameter that tunes G. bimaculatus to a pulse period of 30–40 ms (Schöneich et al., 2015), and our sensitivity analysis identifies this parameter as crucial for shaping the network’s tuning.
Interestingly, changing the rebound delay has differential effects in LN3M and in the output neuron of the network, LN4M. In LN3M, increasing the rebound delay changes both duration and pause preference and increases the preferred pulse period without changing the duty cycle preference (Figure 5C and D). However, in LN4M, a longer rebound delay only affects pause preference, but not duration preference, and thereby reduces the preferred duty cycle from 0.50 to 0.25 (Figure 5D, Figure 5—figure supplement 2). This reduction of the preferred duty cycle is a correlate of the low duty cycle bias observed in the network (Figure 4C). We therefore investigated the origin of this effect more closely.
LN4M receives excitatory input from LN3M and inhibitory input from LN2M and applies a threshold to the summed inputs (see Figure 2A, Figure 2—figure supplement 2). To determine which computation in LN4M reduces the preferred duty cycle, we removed the inhibition from LN2M and the threshold in LN4M’s output nonlinearity. While the threshold has only minor effects on tuning, removing the inhibition is sufficient to restore the preference for intermediate duty cycles in LN4M (Figure 5B–D, blue). This implies that the inhibition from LN2M suppresses the responses to high duty cycles in LN4M. We find that changes in the strength and in the timing of excitatory inputs from LN3M to LN4M contribute to this suppression (Figure 5E and F). First, the excitatory inputs from LN3M weaken with increasing duty cycle, leading to a relatively stronger impact of the inhibition from LN2M on the responses of LN4M (Figure 5F). Second, the excitatory inputs from LN3M arrive later with increasing duty cycle, resulting in a more complete overlap with the inhibition from LN2M and therefore to a more effective suppression of LN4M spiking responses (Figure 5E and F).
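As a toy illustration of the LN4M computation and of the in-silico manipulations above, the output can be written as a thresholded, weighted difference of LN3M excitation and LN2M inhibition; setting the inhibitory weight to zero corresponds to removing the inhibition. The weights and threshold are illustrative, not fitted values.

# Toy sketch of the LN4 computation: thresholded sum of excitation and inhibition.
import numpy as np

def ln4_output(ln3_rate, ln2_rate, w_exc=1.0, w_inh=0.7, threshold=0.1):
    drive = w_exc * np.asarray(ln3_rate) - w_inh * np.asarray(ln2_rate)
    return np.maximum(drive - threshold, 0.0)   # rectified output rate

# usage: removing the inhibition (w_inh=0) restores responses that the
# LN2-derived inhibition would otherwise suppress
ln3_rate = [0.2, 0.8, 0.4]
ln2_rate = [0.3, 0.5, 0.6]
with_inhibition = ln4_output(ln3_rate, ln2_rate)
without_inhibition = ln4_output(ln3_rate, ln2_rate, w_inh=0.0)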
These results demonstrate that response suppression by inhibition in LN4M becomes more effective at high duty cycles and is one source of the network’s bias towards low duty cycle preferences (Figure 4C). We reasoned that other sources of response suppression, like inhibition or response adaptation elsewhere in the network, could further contribute to this bias.
Mechanisms of response suppression control duty cycle preference
Four additional computational steps in the network could contribute to the bias against high duty cycles (Figure 5G): first, the broad inhibitory lobe of the filter in AN1M (Figure 2A, Figure 2—figure supplement 1, Figure 2—figure supplement 2) reduces responses to subsequent pulses in a train (Figure 5G, green, Figure 2—figure supplement 1A and B) because its effect accumulates over multiple pulses. Importantly, this suppression grows with the integral of the stimulus over the duration of the filter lobe and hence with the pulse duration and duty cycle. In AN1M, this leads to shorter pulse responses due to thresholding and saturation by the output nonlinearity (Figure 2—figure supplement 1E, Figure 2—figure supplement 2). Second and third, the adaptation in AN1M and LN3M accumulates during and across pulses and reduces these neurons’ responses (Figure 5G, teal and red). This effect is again most prominent for pulse patterns with high duty cycles (long pulses, short pauses) since adaptation will be strongest during long pulses and recovery prevented during short pauses. Last, as discussed above, the inhibition from LN2M also suppresses responses in LN4M most strongly for stimuli with high duty cycle (Figure 5G, purple).
To examine how the different sources of suppression shape the model’s tuning, we removed one or more of these computational steps: we set the inhibitory lobe of the AN1M filter to zero, we removed the adaptation from AN1M and LN3M, and we removed the inhibition forwarded from LN2M to LN4M (Figure 5G). To accentuate the effects of these manipulations, we increased the delay of the LN5M→LN3M connection, which led to a preference for longer pulse periods (50 ms) and for short duty cycles (0.25) in LN4M when all sources of suppression were present (Figure 5H, left). Consistent with the prediction that different sources of suppression in the network reduce responses for stimuli with high duty cycles, the network’s preferred duty cycle tended to increase when suppression was removed (Figure 5H and I, Figure 5—figure supplement 1). Removing some sources of suppression tended to induce more sustained responses during a pulse train (Figure 5G) and to increase the preferred duty cycle from 0.25 to 0.50 (Figure 5H and I). Removing all four sources of suppression abolished period tuning and produced a preference for constant tone stimuli with a duty cycle of 1.0 (Figure 5H, right). Different sources of suppression sometimes interacted in unexpected ways. For instance, removing the inhibitory lobe and adaptation in AN1M decreased rather than increased duty cycle preference because AN1M produced stronger responses when adaptation was absent, which in turn induced stronger adaptation downstream in the network.
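In terms of model parameters, these ablations amount to zeroing the corresponding gains, as sketched below with hypothetical parameter names; the actual model may parameterize these computations differently.

# Sketch of the suppression ablations as parameter overrides (names are placeholders).
def remove_suppression(params, sources):
    """Return a copy of params with the selected suppression sources switched off."""
    zeroed = {
        "an1_inhibitory_lobe": "AN1_filter_inhibitory_gain",   # inhibitory lobe of the AN1 filter
        "an1_adaptation": "AN1_adaptation_strength",           # adaptation in AN1
        "ln3_adaptation": "LN3_adaptation_strength",           # adaptation in LN3
        "ln2_to_ln4_inhibition": "LN2_to_LN4_gain",            # inhibition forwarded to LN4
    }
    out = dict(params)
    for source in sources:
        out[zeroed[source]] = 0.0
    return out

# usage: remove adaptation in AN1 and LN3 only
params = {"AN1_filter_inhibitory_gain": 0.5, "AN1_adaptation_strength": 2.0,
          "LN3_adaptation_strength": 1.5, "LN2_to_LN4_gain": 0.7}
no_adaptation = remove_suppression(params, ["an1_adaptation", "ln3_adaptation"])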
Overall, these results identify mechanisms of response suppression by adaptation and inhibition as a cause for the network preferring small duty cycles (short pulses and long pauses). They demonstrate how specific implementation details of a recognition mechanism constrain phenotypic diversity, but also reveal how different model parameters can be used to create phenotypic diversity, like changing the preferred duty cycle (Figure 5D,H, and I). As a last step in our analysis, we examined a parameter that switches the preference type from period to duration tuning via changes in the rebound dynamics.
Changes in rebound dynamics can switch the preference type by engaging suppression
Switches in preference type occur regularly even among closely related cricket species (Bailey et al., 2017; Hennig, 2003; Hennig et al., 2016), and we therefore looked for a model parameter that induced such a switch. We found that increasing the gain of the inhibitory lobe of the rebound filter in LN5M (‘inhibitory lobe’ in short) (Figure 6A, blue) switched the preference from period tuning (Figure 6B, bottom) to duration tuning (Figure 6B, top), characterized by high tolerance for pause and high selectivity for duration (Figure 1B). Increasing the inhibitory lobe parameter amplifies but also prolongs the rebound (Figure 6A, blue), and we examined which of these two changes creates the switch from period to duration tuning. Other model parameters – like the gain of the input from LN2M to LN5M or from LN5M to LN3M (Figure 6A and B, ‘input gain,’ red) or the LN5M output gain – only amplify the rebound and retain period tuning. This indicates that the amplification of the rebound is insufficient, and that the prolongation of the rebound is necessary to cause the preference switch. In the original model, rebounds are short because the inhibition of LN5M triggered by LN2M activity always cuts off and suppresses rebound responses (Figure 6A, black, see also Figure 2E and F), and this happens even when the rebound is amplified (Figure 6A, red). By contrast, in the model with an increased inhibitory lobe of the LN5M rebound filter, the rebound persists during the LN2M inhibition (Figure 6A, blue). The prolonged rebound drives stronger adaptation downstream in LN3M (Figure 6—figure supplement 1), in particular for pulse patterns with short pauses, because shorter pauses prevent recovery from adaptation in LN3M. This response suppression for short pauses abolishes the preference for intermediate pause durations necessary for period tuning and switches the preference type to duration tuning (Figure 6C). This last analysis highlights the dual role of suppression in shaping the recognition phenotype of the network: suppression constrains phenotypic diversity by reducing responses to patterns with long duty cycles (Figure 4C and D, Figure 5H and I), but it also contributes to phenotypic diversity by adjusting the network’s preferred duty cycle (Figure 5H, I) or by switching the preference type (Figure 6).
Discussion
How diversity in intraspecific communication systems is shaped by neural networks in the sender and in the receiver is an open question. Here, we asked whether the song recognition network in G. bimaculatus (Schöneich et al., 2015) has the potential to generate the diversity of song recognition phenotypes known from crickets. In particular, we tested whether the delay line and coincidence-detector network in G. bimaculatus can be considered a ‘mother network’ for recognizing the species-specific song pattern in different cricket species (Figure 1; Hennig et al., 2014). A model of the neuronal network reproduced the neurophysiological and behavioral data using simple, elementary computations like filtering, nonlinear transfer functions (nonlinearities), adaptation, and linear transmission with delays (Figures 2 and 3). Examining the model’s responses over a wide range of parameter values revealed that the network can generate all types of song preferences known from different crickets and even other insects (Figure 4A–E). We then identified key parameters that either support or constrain the phenotypic diversity the network can produce, providing insight into how the network can evolve to become selective for different song parameters (Figures 4F–H to 6).
The delay line and coincidence-detector network can produce the full diversity of preferences for pulse duration and pause in crickets
Four principal preference types have been identified in crickets and other insects (Figure 1B): preference for pulse duration (Deutsch et al., 2019; Gray et al., 2016; Hennig, 2003), pulse pause (Schul, 1998), pulse period (Hennig, 2003; Hennig, 2009; Rothbart and Hennig, 2012; Rothbart et al., 2012), and duty cycle (Hennig et al., 2016). Variants of our network model produce all four of these preference types for the range of song parameter values relevant for crickets (Figure 4A–E).
While the network model analyzed here is derived from recordings in one species (G. bimaculatus), the delay line and coincidence-detector network is likely shared among closely related cricket species. The phylogenetic position of G. bimaculatus close to the base of the phylogenetic tree from which many other species emerged is consistent with this idea (Gray and Cade, 2000). Our finding that this network can produce all known preferences for pulse duration and pause supports this idea and suggests that it forms a common substrate – a ‘mother network’ – for the diversity of song recognition phenotypes in crickets. How can the ‘mother network’ hypothesis be tested? Behavioral tests can provide insight into whether other species use the coincidence-detection algorithm found in G. bimaculatus (Hedwig and Sarmiento-Ponce, 2017). These experiments can, for instance, test the prediction that the duration of the last pulse in a chirp only weakly impacts network responses. Species that violate this prediction are unlikely to recognize song by the same coincidence mechanism. However, the ‘mother network’ hypothesis does not imply that all crickets implement a coincidence-detection algorithm, just that they reuse the same neurons with largely conserved response properties. In fact, our analyses have shown that coincidence detection can be circumvented through changes in key parameters to produce a different preference type (Figure 6, Figure 4—figure supplement 1). Further electrophysiological experiments in G. bimaculatus are therefore crucial to reveal the precise biophysical mechanisms that tune the network and ultimately to link changes in gene expression, for instance of specific ion channels, to changes in network tuning. Importantly, these experiments need to be extended to other species by identifying and characterizing homologues of the neurons in the network. Recordings in other species are challenging but feasible since homologous neurons are expected to be found in similar locations in the brain. Based on the known behavioral tuning, our model produces testable predictions for what key properties of these neurons should look like in any given species (see below).
Future studies will also show whether the network can explain more complex preference functions known from some crickets and also other insects. For instance, preference types that betray resonant cellular or network properties are known from katydids (Bush and Schul, 2005; Webb et al., 2007), and we find that the network can produce these multi-peaked response fields (Figure 4—figure supplement 2D), but it remains to be seen whether similar preference types exist in crickets. Several species of crickets produce complex songs that are composed of multiple types of pulse trains, and it is unclear whether the current network can reproduce the known behavioral preference for such complex songs (Bailey et al., 2017; Cros and Hedwig, 2014; Hennig and Weber, 1997).
In addition, we have not yet explored the network’s ability to reproduce the behavioral selectivity for parameters on the longer timescale of chirps (Figure 1A; Grobe et al., 2012; Blankers et al., 2016; Hennig et al., 2016). It is likely that the network can explain some properties of the selectivity for chirps known from crickets. For instance, the fact that a minimum of two pulses is required to produce coincidence in the network could at least partly explain the existence of a minimal chirp duration for G. bimaculatus (Grobe et al., 2012). Likewise, suppression in the network reduces responses to long chirps, which could explain the reduced behavioral preference for long chirps. However, the current electrophysiological data do not sufficiently constrain responses at these long timescales, and further studies are needed to address this issue more comprehensively.
Lastly, our model does not address the behaviorally well-documented inter-individual variability in phonotaxis (Grobe et al., 2012; Meckenhäuser et al., 2014), which likely arises at multiple levels: at the level of song pattern recognition (inter-individual differences in the network parameters), at the level of phonotaxis behavior (biases and noise in localizing the sound), at the motivational level (low or high motivation leading to more or less selective responses), and at the motor level (variability from motor noise). Identifying the contribution of these different levels is challenging since a full characterization of the behavioral phenotype in terms of response fields cannot be obtained reliably at the individual level – the stimulus space is too large. Our model of song pattern recognition therefore does not explicitly consider inter-individual variability but is meant to represent the behavior of an average female.
How to tune a pulse pattern detector?
Our sensitivity analysis of the model identified three classes of parameters that define the model’s tuning (Figure 4H). First, parameters that control the relative timing of inputs to the coincidence-detector LN3M set the network’s preferred pulse period. These include input delays in all upstream neurons (Figure 5A and B) but also passive and active membrane properties that delay the rebound responses in LN5M (Figure 5—figure supplement 1). Second, parameters that lead to a stronger and more sustained rebound in LN5M can shift the preference from pulse period to pulse duration tuning (Figure 6). Lastly, sources of response suppression, like inhibition or adaptation, reduce responses to long pulses and high duty cycles (Figure 5G–I). These three classes of parameters account for changes within and transitions across the principal types of song preference in crickets. The model thus provides testable hypotheses for how response properties in the neuronal network may have evolved to compute the preference functions of different species. For instance, species that prefer different pulse periods than G. bimaculatus like Teleogryllus leo (Rothbart and Hennig, 2012), Gryllus locorojo (Rothbart et al., 2012), Gryllus firmus (Gray et al., 2016), or Teleogryllus oceanicus (Hennig, 2003) could differ in the delays of inputs to LN3 (Figure 5B). Duty cycle-tuned species like G. lineaticeps or G15 (Hennig et al., 2016) may exhibit weaker suppression throughout the network, for instance, reduced adaptation in LN3 (Figure 5G–I). Lastly, species with duration tuning such as T. commodus (Hennig, 2003) or G13 (Gray et al., 2016) could exhibit longer and stronger rebound responses in LN5 (Figure 6).
How can these changes be implemented in a biological network? Although our phenomenological model is independent of a specific biophysical implementation, all model components have straightforward biophysical correlates. We can therefore propose biophysical parameters that tune specific aspects of the network in a given implementation. To illustrate this point, we will briefly provide examples of how the four elementary computations of the model – filtering, adaptation, nonlinear transfer functions (nonlinearities), and linear transmission with a delay – can be implemented and tuned. First, filters are shaped by active and passive properties of the membrane: individual filter lobes act as low-pass filters that dampen responses to fast inputs and arise from integrating properties of the passive membrane like capacitive currents (Dewell and Gabbiani, 2019; Azevedo and Wilson, 2017). Increasing the membrane capacitance therefore leads to stronger low-pass filtering. An added negative (inhibitory) lobe makes the stimulus transformation differentiating and can arise from conductances that hyperpolarize the membrane, like potassium or chloride channels (Nagel and Wilson, 2011; Slee et al., 2005; Lundstrom et al., 2008). Increasing the potassium conductance in Drosophila olfactory receptor neurons makes their responses more differentiating (Nagel and Wilson, 2011), while reducing the conductance of delayed-rectifier potassium channels in the auditory brainstem makes responses less differentiating (Slee et al., 2005; Lundstrom et al., 2008). The filter in LN5M that produces the post-inhibitory rebound arises from hyperpolarization-activated cation currents like Ih (mediated by HCN non-selective cation channels) and It (mediated by T-type calcium channels), which control the rebound’s amplitude and latency, respectively (Pape, 1996; Engbers et al., 2011). Second, adaptation is implemented in the model either via inhibitory filter lobes or via divisive normalization. Biophysically, adaptation can arise from synaptic depression (Tsodyks et al., 1998; Fortune and Rose, 2001), or from subthreshold or spike-frequency adaptation (Farkhooi et al., 2013; Nagel and Wilson, 2011; Benda and Herz, 2003; Benda and Hennig, 2008). Spike-frequency adaptation can arise from inactivating sodium currents, voltage-gated potassium currents (M-type currents), or calcium-gated potassium currents (AHP currents) (Slee et al., 2005; Heidenreich et al., 2011). Increasing the expression level of these channels controls the strength of adaptation, while the kinetics specific to each channel type control adaptation speed. AHP currents can last seconds if spiking leads to a long-lasting increase in the intracellular calcium concentration, giving rise to long inhibitory filter lobes or adaptation time constants in the model (Cordoba-Rodriguez et al., 1999). Third, nonlinearities translate the integrated synaptic input to firing output. The nonlinearity’s threshold is governed by the density of sodium channels at the spike-initiating zone, while the steepness and saturation of the nonlinearity depend on the inactivation kinetics of sodium channels or the spiking dynamics controlled by the Na/K ratio (Prescott et al., 2008; Lundstrom et al., 2008).
Lastly, transmission delays arise from axonal conduction and synaptic delays but also from other mechanisms, like low-pass filtering of the membrane voltage at the pre- and postsynapse (Creutzig et al., 2010; Zhou et al., 2019), latencies arising from the integration of inputs to the spiking threshold, or from spike generation (Izhikevich, 2004; Figure 5—figure supplement 1). Synaptic weights are set by the number of synaptic boutons between two neurons (Peters’ rule, see Rees et al., 2017) and the amount of neurotransmitter that can be released at the presynapse (vesicle number and loading) and absorbed at the postsynapse (number of transmitter receptors). These examples demonstrate that our phenomenological model has a straightforward, physiologically plausible implementation and can be used to propose experimentally testable hypotheses for transitions between types of behavioral preferences.
The three computations that define model tuning – response suppression, post-inhibitory rebounds, and coincidence detection – occur across species and modalities. Delay lines are prominent in binaural spatial processing (Schnupp and Carr, 2009) but have also been implicated in visual motion detection (Borst and Helmstaedter, 2015) or pulse duration selectivity in vertebrate auditory systems (Aubie et al., 2012; Buonomano, 2000). Suppression is known to act as a high-pass filter for pulse repetition rates (Baker and Carlson, 2014; Benda and Herz, 2003; Fortune and Rose, 2001); in our case, it biases the network towards responding to rapidly changing patterns, like those with short pulses (Figure 4C). Finally, post-inhibitory rebounds have been implicated in temporal processing in different species like honeybees (Ai et al., 2018), fish (Large and Crawford, 2002), frogs (Rose et al., 2015), or mammals (Felix et al., 2011; Kopp-Scheinpflug et al., 2018). The computations found to control the song preference in G. bimaculatus could therefore also govern pattern recognition in other acoustically communicating animals. For instance, different bird species produce songs with silent gaps of species-specific durations between syllables, and auditory neurons in the bird’s brain are sensitive to these gaps (Araki et al., 2016). Our analyses suggest that the gap preference can be shifted from longer gaps (low duty cycle: Java sparrow, Bengalese finch) to shorter gaps (high duty cycle: starling, zebra finch) by reducing suppression or adaptation in the network. This could be implemented, for example, by reducing postsynaptic GABA receptors or by lowering the expression levels of voltage-gated potassium channels.
We here focused on modifying the magnitude of parameters, corresponding, for instance, to the expression levels of neurotransmitters or ion channels. Neuronal networks, however, can also evolve to produce novel phenotypes by changing their topology, through a recruitment of novel neurons, a gain or loss of synapses, or switches in synapse valence from excitatory to inhibitory as has been shown in motor networks in Caenorhabditis elegans (Hong et al., 2019) and snails (Katz, 2011). In addition, we have not considered neuromodulators, which can rapidly alter network tuning (Bargmann, 2012; Marder, 2012; Marder et al., 2014), and which likely play a functional role in the phonotactic response (Poulet and Hedwig, 2005).
Algorithmic details specify constraints
Previous studies revealed that Gabor filters can produce the full diversity of song preference functions found in insects (Clemens and Hennig, 2013; Clemens and Ronacher, 2013; Hennig et al., 2014). However, the computation giving rise to Gabor filters can be implemented with multiple algorithms, each subject to specific constraints. For instance, the period tuning found in G. bimaculatus can be produced by the now-known delay line and coincidence detection mechanism (Schöneich et al., 2015), but also by the interplay between precisely timed excitation and inhibition (Aubie et al., 2012; Rau et al., 2015), by cell-intrinsic properties like resonant conductances (Azevedo and Wilson, 2017; Rau et al., 2015), or by a combination of synaptic depression and facilitation (Fortune and Rose, 2001). By considering the implementation of the pattern recognition algorithm in a particular species, we revealed a bias in the diversity of phenotypes that this specific implementation can produce: several sources of suppression induce a bias towards preference for low duty cycle stimuli (Figures 4B–D and 5G–I). This highlights the importance of studying nervous system function and evolution beyond the computational level, at the level of algorithms and implementations (Marr, 1982).
Functional tradeoffs limit behavioral diversity
The low duty cycle bias present in the recognition mechanism of G. bimaculatus has several implications for the evolution of song preference in crickets and beyond: perceptual biases that have evolved in contexts like food or predator detection are known to shape sexual selection (Guilford and Dawkins, 1993; Ter Hofstede et al., 2015; Phelps and Ryan, 1998; Ryan and Cummings, 2013). In the case of song recognition in crickets, suppression (adaptation, inhibition, onset accentuation, Figure 5G) reduces neuronal responses to long-lasting tones and likely evolved to save metabolic energy (Niven, 2016) or to make song recognition more robust to changes in overall song intensity (Benda and Hennig, 2008; Hildebrandt et al., 2011; Schöneich et al., 2015). As a side effect, adaptation now biases the song recognition mechanism towards preferring pulse trains with low duty cycles (Figure 5H and I), which is consistent with the apparent absence of pause tuning in crickets (Hennig et al., 2014). Interestingly, pause tuning is known from katydids (Schul, 1998), suggesting that their song recognition system is not subject to the low duty cycle bias. Katydids may have avoided this bias either by using a delay line and coincidence detection network like that found in G. bimaculatus but with weaker suppression (Figure 5G–I) or by using a different network design that is subject to different constraints (Bush and Schul, 2005). Thus, computations that increase energy efficiency and robustness can constrain the phenotypic diversity of a whole species group.
From evolutionary pattern to process
How can a diversity of neural networks evolve to drive the diversification of species-specific communication signals? Our modeling study of the song recognition network in the cricket brain provides first evidence that the underlying neuronal network is computationally flexible: by adapting physiological parameters, the network can produce all preference types described in crickets (Figure 4B–E). The computational flexibility of the recognition mechanism may explain the species richness as well as the speed of evolution in a particular taxon like crickets (Alexander, 1962; Blankers et al., 2015; Desutter-Grandcolas and Robillard, 2003; Oh and Shaw, 2013; Otte, 1992): female preferences drift with little constraint in signal space, possibly pushed by abiotic factors (environmental noise selects against preferences for very short pauses) and biotic factors (avoiding overlap with heterospecifics; Amezquita et al., 2011). Male song evolution follows changes in female preference since only males that sing attractive songs will reproduce. In this scenario, a female network that has the capacity to produce many different preference types supports the divergence of the communication system. However, this co-evolution of song preference and song structure requires male song production networks to be as flexible as the female song recognition networks (Jacob and Hedwig, 2020; Schöneich, 2020). There is also accumulating evidence for genetic coupling between the networks that produce and recognize the song pattern, which may ensure that sender and receiver stay tuned to each other during the evolution of the song pattern (Xu and Shaw, 2019; Schöneich, 2020; Xu and Shaw, 2021).
Computational flexibility also needs evolvability – the ability to generate specific and robust phenotypes during evolution (Blankers et al., 2015). The ‘mechanistic degeneracy’ of computation in biological neural networks – the fact that specific network outputs can be obtained by tuning disparate parameters – is thought to be a prerequisite for evolvability (Wagner, 2011, see also Hasson et al., 2020; Leonardo, 2005; Schneider et al., 2021). The degeneracy concept has been demonstrated extensively for motor networks (Gutierrez et al., 2013; Prinz et al., 2004), and here we also found evidence of degeneracy in the cricket song recognition network. For instance, the preferred pulse period (Figure 5A and B, Figure 5—figure supplement 1) or the pulse duty cycle (Figure 5G–I) can be set by multiple parameters. The hypothesis that high-dimensional and degenerate biological parameter spaces are more evolvable could be tested by assessing the computational flexibility of a minimal model of the network that produces the observed behavior with as few parameters as possible.
If a neural network is flexible, how can it maintain a robust phenotype within a species? We find that many parameters have only weak impacts on the network’s preference, leading to a relatively large fitness-neutral parameter set and a robust phenotype (Prinz et al., 2004). For instance, we observed that the rebound amplitude in LN5 tends to be variable across electrophysiological recordings in different individuals but that the network model’s output is relatively robust to these changes (Figure 2B, Figure 2—figure supplement 4). Other parameters crucially affect the recognition phenotype and support evolvability. These fitness-critical parameters likely drive changes in the recognition phenotype during speciation events to ensure species discrimination (Amezquita et al., 2011; Schmidt et al., 2011). Future studies will employ novel computational methods for characterizing the high-dimensional parameter space (Bittner et al., 2021; Gonçalves et al., 2020) to more comprehensively characterize the fitness-neutral parameter set leading to interindividual variability and to provide insight into the robustness and evolvability of pattern recognition networks.
Materials and methods
Electrophysiological data
The data used for fitting the model come from intracellular recordings with sharp microelectrodes of AN1, LN2, LN3, LN4, and LN5 and are published in Kostarakos and Hedwig, 2012 and Schöneich et al., 2015. They include (1) 12 stimuli with a pulse duty cycle of ~0.5 and periods ranging from 10 to 98 ms (Figure 2B), (2) 10 stimuli with a pulse period of 40 ms and duty cycles ranging between 0.1 and 1.0 (Figure 2C), and (3) 12 stimuli with a pulse duration of 20 ms and pauses varying between 6 and 90 ms (Figure 2D). During the electrophysiological experiments, each pulse train was presented interleaved with a chirp pause of 230 ms.
Quantification of tuning in the recordings and the model responses
In the recordings, spikes were detected using custom routines that detect peaks in the voltage traces and are robust to changes in baseline. The accuracy of spike detection was checked by visual inspection of the voltage traces. From the spike times, average firing rates were calculated by dividing the number of spikes produced during each chirp (pulse train and chirp pause) by the chirp period (Figure 2B–D). For the non-spiking LN5, the response corresponds to the voltage of the rebound response. Voltage rates were obtained by first calculating a baseline voltage as the average voltage in the 25 ms preceding a given pulse train, and then integrating the supra-threshold components of the voltage. This integral voltage was then divided by the chirp period to obtain a rate. Note that the tuning curves for LN5 are not very accurate because the rebound responses have a low amplitude and the baseline voltage fluctuated considerably during the LN5 recordings.
For the model, tuning curves and response fields were calculated as for the recordings – by integrating the firing rate outputs of the spiking neurons in the model or the rebound voltage for the non-spiking LN5 and dividing the resulting values by the chirp period.
Model inputs and simulation
We built a rate-based, phenomenological model of the song recognition network in G. bimaculatus. Sound inputs were presented as the time-varying amplitude of pulse trains with the specified pulse and chirp structure. Model responses were simulated at a sampling rate of 1 kHz (1 ms resolution) and computed as if pulse trains and chirp pauses were repeated endlessly.
Elementary computations
The model was built from four elementary neural computations. In the following, we will first define the elementary computations and then describe how they were combined to model each neuron in the network. Neurons in the model are referred to with a subscript M.
Filtering
Filtering was implemented via $y(t) = \sum_{\tau} h(\tau)\, s(t - \tau)$, where $s(t)$ and $y(t)$ are the stimulus and response at time $t$, respectively, and $\tau$ is a temporal delay. The causal filter $h$ was constructed from discrete Gaussian and exponential kernels. A discrete Gaussian kernel was defined as $h(t) \propto \exp(-t^2 / (2\sigma^2))$ with support $t \in [0, T]$, where $T$ is the filter duration, and width $\sigma$ chosen such that the kernel decays to near zero within the support. Exponential kernels were defined as $h(t) \propto \exp(-t / \tau_f)$, with support $t \in [0, T]$, where $T$ is the filter duration, and decay time constant $\tau_f$. Note that throughout the article we use ‘filter duration’ as a non-technical synonym for ‘support,’ which specifies the time over which the filter is defined. Gaussian (Figure 2—figure supplement 1A) and exponential filters are low-pass or integrating filters, which smooth the stimulus (Figure 2—figure supplement 1B). The filters were not normalized. Bandpass filters were implemented as biphasic filters by either differentiating a Gaussian filter (Figure 2—figure supplement 1A) or by subtracting two Gaussian and/or exponential filters to form the positive (excitatory) and negative (inhibitory) filter lobes. Parameterizing biphasic filters as combinations of Gaussians and/or exponentials provides greater and more direct control over the timing and the gain of the filters’ excitatory and inhibitory lobes. Differentiating filters respond most strongly to changes in the stimulus, for instance, to the onsets or offsets of pulses (Figure 2—figure supplement 1B). The support parameter was initialized to be sufficiently long and was typically fixed during model fitting (Table 1).
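As an illustration, a minimal MATLAB sketch of these kernels and of the filtering step is given below; the kernel parameters (sigma, tau, T) and the example stimulus are illustrative and are not the fitted values from Table 1 or the code from the cricketnet repository:

    % Minimal sketch of the filtering stage (illustrative parameters).
    % Time is in ms, sampled at 1 kHz (1 ms bins).
    dt = 1;                                   % ms per sample
    T  = 40;                                  % filter support in ms
    t  = (0:dt:T)';                           % time axis of the causal kernel
    sigma = 5;                                % width of the Gaussian kernel
    gauss = exp(-t.^2 / (2*sigma^2));         % low-pass (integrating) kernel
    tau   = 8;                                % decay of the exponential kernel
    expk  = exp(-t / tau);                    % low-pass (integrating) kernel
    % Biphasic (differentiating) kernel: narrow excitatory lobe minus a
    % broader, delayed inhibitory lobe.
    biphasic = exp(-t.^2/(2*2^2)) - 0.5*exp(-(t-10).^2/(2*8^2));
    % Apply a kernel to a stimulus s(t), here a pulse train envelope
    % with 20 ms pulses and a 40 ms period.
    s = double(mod(0:999, 40) < 20)';
    y = conv(s, biphasic);                    % causal filtering
    y = y(1:numel(s));                        % keep the original length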
Nonlinear transfer functions
Nonlinear transfer functions (nonlinearities) transform inputs to implement thresholding or saturation. The model employs two types of nonlinearities: a rectifying nonlinearity thresholds the input at the threshold value $x_0$ and scales it with a gain $g$: $y = g \cdot \max(x - x_0, 0)$ (Figure 2—figure supplement 1D). In many cases, the threshold parameter was used to constrain signals to be purely positive or negative, and it was therefore often fixed to 0. A sigmoidal nonlinearity combines a soft threshold with saturation: $y = y_0 + (y_{\max} - y_0) / (1 + e^{-g (x - x_s)})$, with gain $g$, shift $x_s$, and minimal and maximal output $y_0$ and $y_{\max}$, respectively (Figure 2—figure supplement 1E).
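A minimal MATLAB sketch of these two transfer functions, using the parametrization given above and illustrative parameter values:

    % Rectifying and sigmoidal nonlinearities (illustrative parameter values).
    rectify = @(x, x0, g) g * max(x - x0, 0);             % threshold-linear
    sigmoid = @(x, g, xs, y0, ymax) ...
        y0 + (ymax - y0) ./ (1 + exp(-g * (x - xs)));     % soft threshold + saturation
    x = linspace(-1, 2, 301);
    y_rect = rectify(x, 0, 1.5);                          % threshold 0, gain 1.5
    y_sig  = sigmoid(x, 4, 0.5, 0, 1);                    % output bounded in [0, 1]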
Linear transmission with delay
Neuronal connections transmit neural activity linearly with delay $\delta$ and gain $w$. Multiple inputs to a neuron are added. The gain is negative for inhibitory and positive for excitatory synapses and controls the input strength. The delay parameter corresponds to the delay that needs to be added to the input of a neuron to reproduce the timing of the output of that neuron. It includes axonal conduction or synaptic transmission delays but also other delay mechanisms, like low-pass filtering of the membrane voltage at the pre- and postsynapse (Creutzig et al., 2010; Zhou et al., 2019) or latencies arising from the integration of inputs to the spiking threshold or from spike generation.
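As a sketch, and assuming the simple parametrization $y(t) = w\,x(t - \delta)$ implied above, transmission and summation of inputs could be written as:

    % Linear transmission with gain w and delay delta (delta in 1 ms samples).
    transmit = @(x, w, delta) w * [zeros(delta, 1); x(1:end-delta)];
    x1 = double(mod(0:999, 40) < 20)';        % hypothetical presynaptic rate
    x2 = circshift(x1, 7);                    % a second, time-shifted input
    % Excitatory (positive gain) and inhibitory (negative gain) inputs add linearly.
    summed = transmit(x1, 1.2, 2) + transmit(x2, -0.8, 5);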
Adaptation
Adaptation in AN1M and LN2M is implemented using differentiating filters (Figure 2—figure supplement 1A and B). Such filters produce adaptation via their broad inhibitory lobe, which suppresses subsequent responses (Nagel and Wilson, 2011). For AN1M and LN3M, adaptation effects were better captured using divisive normalization (Figure 2—figure supplement 1F). The input to the adaptation stage, $x(t)$, is low-pass filtered using an exponential filter to generate an adaptation signal $a(t)$ that divides the input: $y(t) = x(t) / (x_0 + g\, a(t))$. Here, $x_0$ and $g$ control the strength of adaptation, while the width of the exponential filter, $\tau_a$, controls the timescale of adaptation. Both implementations of adaptation – differentiating filters and divisive normalization – can produce similar adaptation time courses. However, since adaptation with differentiating filters acts subtractively, the negative response components accumulate over time to a negative filter output (Figure 2—figure supplement 1F). For modeling adaptation on longer timescales, this can completely suppress responses to subsequent stimuli. This complete suppression does not occur with divisive normalization since it only scales the response.
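A minimal MATLAB sketch of divisive normalization with this parametrization (all parameter values are illustrative):

    % Adaptation via divisive normalization (illustrative parameters).
    dt    = 1; tau_a = 50;                     % ms; adaptation timescale
    t     = (0:dt:300)';
    ek    = exp(-t / tau_a);                   % exponential low-pass kernel
    x     = double(mod(0:999, 40) < 20)';      % pulse-train input
    a     = conv(x, ek); a = a(1:numel(x));    % adaptation signal (filtered input)
    x0    = 1; g = 0.05;                       % strength of adaptation
    y     = x ./ (x0 + g * a);                 % divisively normalized output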
Model neurons
The elementary computations described above were combined to reproduce the response of each neuron in the network – the firing rate patterns in the case of spiking neurons and the pattern of the rebound responses in the non-spiking LN5M (Figure 2E). All filters and nonlinearities are plotted in Figure 2A, and all parameter names and values are listed in Table 1 and graphically defined in Figure 2—figure supplement 1 and Figure 2—figure supplement 2. The model was implemented in MATLAB. Code for running the model is available at https://github.com/janclemenslab/cricketnet (Clemens, 2021; copy archived at swh:1:rev:73ff788143c6afa3adbbb3e0f61f600a13052352).
AN1M (relay)
AN1 is the input neuron of the network. It faithfully copies the pulse structure and adapts weakly (Figure 2E). These response properties were reproduced using a differentiating linear filter (Figure 2—figure supplement 1): the filter’s narrow and strong excitatory lobe captures the fast and faithful stimulus responses, while the broad inhibitory lobe captures the delayed suppressive effect of past stimulus epochs. AN1M’s filter was generated by concatenating two Gaussians as the excitatory and inhibitory lobes. AN1M has a sigmoidal nonlinearity that saturates responses to longer pulses. To capture the decrease of the response across pulses, we additionally included a divisive normalization stage.
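To illustrate how these stages chain together, a minimal sketch of an AN1M-like processing pipeline is shown below; all parameter values are invented for illustration and are not the fitted values from Table 1:

    % Illustrative AN1M-like chain: differentiating filter -> sigmoidal
    % nonlinearity -> divisive normalization (all parameters invented).
    t   = (0:40)';                                       % 1 ms bins
    h   = exp(-(t-3).^2/8) - 0.4*exp(-(t-15).^2/60);     % biphasic (adapting) filter
    s   = double(mod(0:999, 40) < 20)';                  % pulse-train envelope
    y   = conv(s, h); y = y(1:numel(s));                 % filtering
    y   = 1 ./ (1 + exp(-6*(y - 0.5)));                  % sigmoidal nonlinearity
    a   = conv(y, exp(-t/80)); a = a(1:numel(y));        % adaptation signal
    an1 = y ./ (1 + 0.02*a);                             % divisive normalization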
LN2M (inhibition)
LN2 responds reliably to individual pulses, similar to AN1, but exhibits stronger adaptation during the pulse. This was reproduced using a differentiating filter composed of a truncated Gaussian function as the excitatory lobe and an exponential function as the inhibitory lobe. A rectifying nonlinearity restricts outputs to be positive.
LN5M (post-inhibitory rebound)
LN5 is a non-spiking neuron. It receives inhibitory input coupled to the spike activity of LN2 and generates a rebound depolarization when the inhibition terminates (Figure 2E). The inhibitory input from LN2 over a pulse saturates in LN5, which is modeled using a differentiating input filter followed by a rectifying nonlinearity that restricts the inputs to be negative. The rebound is then produced using a differentiating filter with a narrow excitatory lobe, which further low-pass filters the inhibitory input, and a broad inhibitory lobe, which generates the broad positive voltage deflections at the offset of negative voltage deviations. See Figure 2—figure supplement 1C for an illustration of the principle by which the rebound is produced in the model. The filter was generated by concatenating two exponential filters as the inhibitory (negative) and excitatory (positive) lobes.
LN3M (coincidence detection)
LN3 responds strongly when two excitatory synaptic inputs arrive at the same time: a short-latency input from AN1 and a delayed input from LN5. In the neurophysiological data, the input from AN1 alone is often sufficient to drive spiking (e.g., to the first pulse of a train), suggesting that LN3 does not require two coincident inputs to spike. We therefore define LN3M as a linear adder (with a threshold) that receives stronger input from AN1M than from LN5M. From the LN5M responses, only the positive components corresponding to the rebound were passed on as input to LN3M; these were separated from the negative components of the LN5M responses using a rectifying nonlinearity (Figure 2—figure supplement 1D). The sum of the two inputs is thresholded using a rectifying nonlinearity before being passed into an adaptation stage with divisive normalization. The adaptation after summation reduces LN3M firing over a pulse train, and a final rectifying nonlinearity ensures positive firing rates by cutting off all negative responses that can occur in the model, for instance, due to filtering (Figure 2—figure supplement 1B).
LN4M (feature detection)
Finally, LN4M adds excitatory input from LN3M and inhibitory input from LN2M, which sharpens its selectivity for pulse patterns as compared to LN3M. A rectifying nonlinearity restricts the output firing rate to be positive.
Prediction of phonotactic behavior from network responses
For regular pulse trains, the firing rate of LN4 (number of spikes per chirp divided by the chirp duration) predicts phonotactic behavior very well (Schöneich et al., 2015; Figure 1E). We therefore used the firing rate of LN4M to predict behavioral responses from the network output.
Model fitting
Model parameters were optimized in two phases. First, the model parameters and structure were initialized by hand. This initialization established which computational steps were necessary to reproduce the key response features of the neurons in the network – their response dynamics and tuning – and determined initial parameter values. In the second phase, we used a genetic algorithm to tune the network parameters to optimally fit the data (see Clemens et al., 2014). To simplify fitting, we exploited the feed-forward topology of the network: instead of fitting all parameters simultaneously, we adopted a stepwise procedure in which the model neurons were fitted in order of their position in the network. We started by fitting the parameters of the first neuron in the network, AN1M, to reproduce its firing rate traces. In a second step, we held the parameters of AN1M constant and fitted the parameters of LN2M. We then held the parameters of AN1M and LN2M constant and fitted the temporal dynamics of the rebound response in LN5M but not the inhibitory component, since the precise magnitude and dynamics of the inhibitory components were highly variable across recordings and since the rebound constitutes the effective output of LN5. We then fitted the parameters of LN3M and finally of LN4M, again holding those of the upstream neurons constant. For these last two neurons, firing was very sparse and irregularly timed across trials, which complicated fitting. We therefore used information from both the firing rate dynamics and the tuning curves to optimize the parameters of LN3M and LN4M via a mixed error function, $E = E_{\mathrm{trace}} + w\,E_{\mathrm{tune}}$, where $E_{\mathrm{trace}}$ is the mean square error between the predicted and actual traces, $E_{\mathrm{tune}}$ is the mean absolute error between the predicted and the actual tuning curves, and $w$ was set to 0.1 for LN3M and to 1.0 for the even sparser activity of LN4M. For the other neurons in the network (AN1M, LN2M, LN5M), only the trace error $E_{\mathrm{trace}}$ was used (equivalent to $w = 0$).
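A minimal sketch of such a mixed error function (variable names are illustrative):

    % Mixed error used for fitting LN3M (w = 0.1) and LN4M (w = 1.0):
    % mean squared error of the response traces plus a weighted mean
    % absolute error of the tuning curves.
    mixed_error = @(r_pred, r_data, tune_pred, tune_data, w) ...
        mean((r_pred - r_data).^2) + w * mean(abs(tune_pred - tune_data));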
The initial parameters for fitting at each step were drawn from an exponential distribution ranging between 0.1-fold and 10-fold of the hand-fitted parameters. The exponential distribution was chosen so that equal numbers of initial values fell above and below the hand-fitted values, that is, it corresponds to a uniform distribution after log scaling. While the fitting algorithm is stochastic by nature, it repeatedly found similar optimal parameter sets on independent initializations (not shown).
The model had a total of 55 parameters. Of these, nine parameters (marked by ‘*’ in Table 1) were fixed to their initialized values during fitting to facilitate convergence. For instance, the gain of the input to a neuron is equally set by the output gain of the presynaptic neuron’s nonlinearity or by the synaptic input gain, and therefore only one of these parameters needs to be optimized. Similarly, the support parameter of an exponential filter sets an upper bound for the timescale of that filter, which is in turn given by the filter’s time constant; in that case, the support was typically set to a sufficiently high value and fixed. Lastly, some of the thresholds were fixed to zero since their sole role was to restrict firing rates to be positive.
Evaluation of model performance
After the fitting procedure, the model performance was evaluated based on the tuning curves and the response traces. The fit for the tuning curves is given by $1 - \mathrm{rmse}(a_n, b_n)$ (Figure 2H), where $\mathrm{rmse}$ is the root-mean-squared error and $a_n$ and $b_n$ are the tuning curves from the experimental data and the model, each normalized by the maximum of the tuning curve from the data: $a_n = a / \max(a)$, $b_n = b / \max(a)$. This measure of fit is sensitive to the addition of a constant offset and to a mismatch in the scale of responses between the model and the data, but is independent of the overall scale of the responses in the data. The fit for the response traces is given by the squared Pearson correlation coefficient, $r^2 = \left( \frac{\langle (x - \mu_x)(y - \mu_y) \rangle}{\sigma_x \sigma_y} \right)^2$, where $x$ and $y$ are the response traces from the model and the data, respectively, and $\mu$ and $\sigma$ denote the mean and the standard deviation. The $r^2$ was calculated for different timescales (Figure 2G) by low-pass filtering both the prediction and the response with rectangular windows with durations ranging between 1 and 25 ms before computing the $r^2$. At short timescales, this measure is sensitive to fine details in the firing rate dynamics, while at longer timescales, the measure reflects the match in coarse features of the firing dynamics, for example, the spike counts per pulse.
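As an illustration, the timescale-resolved $r^2$ could be computed as follows (a sketch, not the analysis code used for the paper):

    % Squared Pearson correlation between two traces after smoothing with
    % rectangular windows of increasing duration (x, y: column vectors at 1 kHz).
    function r2 = r2_by_timescale(x, y, win_ms)
        r2 = zeros(numel(win_ms), 1);
        for k = 1:numel(win_ms)
            w  = ones(win_ms(k), 1) / win_ms(k);     % rectangular smoothing window
            xs = conv(x, w, 'same');
            ys = conv(y, w, 'same');
            c  = corrcoef(xs, ys);                   % 2 x 2 correlation matrix
            r2(k) = c(1, 2)^2;
        end
    end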
Generation of model variants for the analysis of the parameter space
To determine the range of preference types the network can produce (Figure 4A–E), we generated models with random parameter sets taken from a range around the parameter set obtained from the fit to G. bimaculatus. For that, we chose 45 of the 55 model parameters. The 10 parameters fixed for this analysis (marked by ‘†’ in Table 1) had also been fixed for fitting (see above), but we allowed more of the parameters of AN1’s filter and output nonlinearity to change. All 45 parameters except the synaptic delay parameters were taken from a hypercube spanning the range between 1/10 and 10-fold around the original parameter set, spaced logarithmically such that the fraction of random parameter values below and above the original value was similar. Delay parameters in the model (see Table 1) were allowed to range uniformly between 1 and 21 ms, irrespective of the original parameter value. The delay parameters correspond to the delay added to the inputs of a neuron required to produce the desired timing of the output of that neuron and include axonal conduction and synaptic transmission delays, delays induced by low-pass filtering at the pre- and postsynapse, and delays from the integration of inputs to the spiking threshold and from spike generation. To ensure uniform sampling from the 45-dimensional parameter space, we used a quasi-random sampling scheme based on the Sobol set. We used the sobolset function in MATLAB with the following parameters: skip 1e3, leap 1e2, scramble MatousekAffineOwen. Using this approach, we generated 5 million different model variants. Of these, 9% (450,000 models) were responsive and selective, that is, they responded to at least one pulse train pattern and did not produce the same response for all patterns tested. Initial tests with extended parameter ranges yielded qualitatively similar phenotypic variability but produced many more unresponsive or unselective models.
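A sketch of this sampling scheme (with a reduced number of samples for illustration; p0 stands in for the vector of fitted parameter values):

    % Quasi-random (Sobol) sampling of the 45-dimensional parameter space.
    n_models = 1e4;                               % 5e6 were used in the study
    n_params = 45;
    s = sobolset(n_params, 'Skip', 1e3, 'Leap', 1e2);
    s = scramble(s, 'MatousekAffineOwen');
    u = net(s, n_models);                         % samples in [0, 1]^45
    % Map to 1/10- to 10-fold around the fitted values, logarithmically spaced
    % (delay parameters would instead be mapped to 1-21 ms).
    p0     = ones(1, n_params);                   % placeholder for the fitted values
    params = p0 .* 10.^(2*u - 1);                 % 10^-1 ... 10^+1 times p0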
Sensitivity analysis
For the sensitivity analysis (Figure 4F–H, Figure 4—figure supplement 1), we either changed single parameters or systematically varied pairs of parameters on a grid. We used the same set of 45 parameters as in the analysis of the parameter space above (see Table 1). Single-parameter sweeps were generated as 21 logarithmically spaced values between 1/100 and 100-fold around the original value, except for delay parameters, which were generated as 21 values ranging between 1 and 41 ms. While these parameter ranges may appear rather large, in particular for the input delay parameters, they were chosen to facilitate the detection of key parameters controlling the network’s tuning. Their specific and meaningful effects on the model tuning were confirmed and further analyzed within the more restricted parameter range also used for the analysis of the network’s phenotypic flexibility above.
For each model, we calculated the response fields for pulse durations and pauses between 1 and 80 ms (2 ms spacing, 1600 stimuli per response field), with a pulse train duration of 600 ms and a chirp pause of 200 ms. To quantify the model’s sensitivity to changes in each parameter, we first calculated the correlation distance (1 – Pearson’s correlation coefficient) between the response fields from the original model and each modified model in the parameter sweep (Figure 4F). By using the correlation coefficient, our sensitivity analysis is robust to trivial changes in the response field like scaling or the addition of a constant to all responses. The average correlation distance over a parameter sweep is then taken as a measure of how much changing a parameter affects the model output.
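As an illustration, the sensitivity measure for one parameter sweep could be computed as follows (response fields are shown as random placeholders):

    % Sensitivity of the response field to one parameter sweep: mean
    % correlation distance between the original field R0 and the fields Ri
    % of the 21 modified models (placeholder fields used here).
    R0 = rand(40, 40);                            % original response field
    d  = zeros(21, 1);
    for i = 1:21
        Ri = rand(40, 40);                        % field of the i-th modified model
        c  = corrcoef(R0(:), Ri(:));
        d(i) = 1 - c(1, 2);                       % correlation distance
    end
    sensitivity = mean(d);                        % average distance over the sweep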
For some parameters, models produced constant output (e.g., all zeros) over most of the parameter sweep, with only one or two step-like changes in the response fields, leading to artificially high sensitivity values. We found that we could reliably exclude such parameters by calculating the median difference in the correlation distance between consecutive parameter values over the parameter sweep and requiring this quantity to be larger than 0.005: $\mathrm{median}_i \left| d(R_0, R_{i+1}) - d(R_0, R_i) \right| > 0.005$, where $|\cdot|$ denotes the absolute value and $d(R_0, R_i)$ denotes the correlation distance (1 – Pearson’s correlation coefficient) between the response field from the original model, $R_0$, and that from a model with the modified parameter, $R_i$. This approach also excluded parameters whose change produced largely untuned models over the sweep. We ensured that none of our results crucially depended on these criteria. For instance, even without the above constraint, the parameters of LN5M and LN3M were still among the top ranked.
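In terms of the sketch above, this criterion amounts to:

    % Exclude parameters with mostly step-like changes along the sweep:
    % require the median change in correlation distance between consecutive
    % parameter values to exceed 0.005 (d is the vector from the sketch above).
    keep_parameter = median(abs(diff(d))) > 0.005;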
For the sensitivity analysis over parameter pairs (Figure 4—figure supplement 1), value grids were generated using the same value ranges as for single parameters for each parameter in the pair, resulting in 21 × 21 = 441 model variants for each of the 1035 unique, unordered parameter pairs.
Characterization of response fields
The response fields known in crickets typically have a roughly ellipsoid shape with a single peak. To assess the extent to which the response fields produced by randomizing the model parameters match these properties, we determined the fraction of fields with a single dominant peak (vs. multiple dominant peaks) and the match of the response fields with a best-fitting ellipse. Dominant peaks were defined as local maxima in the response field exceeding 0.5 of the global maximum value and separated by a trough that is smaller than 0.75 of the value of the lower of the two peaks. This measure is robust to the existence of small or poorly separated local maxima. Ellipses were fitted to binarized fields to be robust to the steps in the response fields caused by the stimulus structure (Figure 3—figure supplement 1). Binarization was performed by thresholding each response field at 50% of its maximum value. We then fitted a two-dimensional Gaussian to the superthreshold coordinate values using MATLAB’s fitgmdist and created a best-fitting ellipse by setting all pixels inside the 99.9% probability threshold to 1.0. Results are robust to the specific choice of this value. The match between the binarized response field and the binarized ellipsoid was estimated via the Jaccard similarity, SJ, between the two binary images. The Jaccard similarity corresponds to the fraction of non-zero pixels shared between the binarized response field and the binarized ellipsoid relative to their union, that is, the intersection over union: SJ = |P ∩ T|/|P ∪ T|, where | … | denotes the set size and ∩ and ∪ correspond to intersection and union, respectively. Using alternative error measures, like the Hamming distance, produced similar results.
Deviations from a perfect ellipsoid, like more rectangular or asymmetrical shapes, can reduce the match (Figure 4—figure supplement 2). Manual inspection of response fields revealed that most models with a Jaccard similarity SJ > 0.5 were well described by an ellipse, and we therefore chose this value of SJ as a threshold for considering a response field well fitted by an ellipsoid. For the asymmetry index, we extracted the lengths of the major and minor axes of the best-fitting ellipse as the eigenvalues of the covariance matrix and took their ratio.
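A sketch of this procedure on a placeholder response field (using fitgmdist as described above; the chi-square threshold implements the 99.9% probability contour):

    % Ellipse fit, Jaccard similarity, and asymmetry index for one response field.
    R  = rand(40, 40);                                % placeholder response field
    B  = R > 0.5 * max(R(:));                         % binarize at 50% of the maximum
    [ii, jj] = find(B);                               % superthreshold coordinates
    gm = fitgmdist([ii jj], 1);                       % single 2D Gaussian
    [gx, gy] = ndgrid(1:size(R, 1), 1:size(R, 2));    % all bin coordinates
    md2 = mahal(gm, [gx(:) gy(:)]);                   % squared Mahalanobis distances
    E   = reshape(md2 <= chi2inv(0.999, 2), size(R)); % 99.9% probability ellipse
    SJ  = nnz(B & E) / nnz(B | E);                    % Jaccard similarity (IoU)
    ax  = sort(eig(gm.Sigma), 'descend');             % covariance eigenvalues
    asymmetry = ax(1) / ax(2);                        % asymmetry index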
Identification of preference types
Response fields were assigned to one of the four principal types in Figure 1B based on two inclusion criteria: first, the angle of the main axis had to fall within ±10° of the prototypical angle (–45° for period, 0° for duration, 45° for duty cycle, 90° for pause). Second, the response field had to be sufficiently selective for the designated stimulus parameter and tolerant for the orthogonal stimulus parameter. For instance, a response field was classified as pause tuned if the angle was within 90 ± 10° and if it was selective for pause and tolerant for duration. We ensured the validity of these criteria through visual inspection of many response fields (see examples in Figure 4E). The orientation angle was calculated by fitting a line to the ‘ridge’ of the response field (the set of pause and duration values that elicited maximal responses). We first identified strongly driving stimuli as those with response values exceeding 50% of the maximal response value for that field. To make the fits more robust, we determined whether this set of stimuli extended more along the pause or the duration axis and selected pause and duration values for the fit as follows: if the response field was more extended along the pause axis, we identified the preferred duration at each pause value for which the response field exceeded the 50% threshold; if the response field was more extended along the duration axis, we identified the preferred pause at each duration value exceeding the 50% threshold. We then fitted a line to the resulting set of duration and pause values and took the inverse tangent of that line’s slope as the orientation angle.
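A sketch of the ridge fit and angle estimate on a placeholder field (the pause/duration axis conventions below are illustrative):

    % Orientation angle of a response field on a 1-80 ms grid (2 ms spacing).
    R   = rand(40, 40);                               % placeholder response field
    thr = 0.5 * max(R(:));                            % 50% response threshold
    [pauses, durations] = ndgrid(1:2:80, 1:2:80);     % rows: pause, columns: duration
    mask = R > thr;
    % Assuming the field extends more along the pause axis: for each pause with
    % suprathreshold responses, take the duration eliciting the maximal response.
    ridge_pause = []; ridge_dur = [];
    for ip = 1:size(R, 1)
        if any(mask(ip, :))
            [~, id] = max(R(ip, :));
            ridge_pause(end+1, 1) = pauses(ip, 1);     %#ok<AGROW>
            ridge_dur(end+1, 1)   = durations(1, id);  %#ok<AGROW>
        end
    end
    p = polyfit(ridge_pause, ridge_dur, 1);            % line through the ridge
    orientation_angle = atand(p(1));                   % inverse tangent of the slope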
Data availability
The data used for fitting the model can be found at https://data.goettingen-research-online.de/dataverse/cricketnet. Code for running the model can be found at https://github.com/janclemenslab/cricketnet.
References
- Duration Tuning across Vertebrates. The Journal of Neuroscience 32:6373–6390. https://doi.org/10.1523/JNEUROSCI.5624-11.2012
- Short-term depression, temporal summation, and onset inhibition shape interval tuning in midbrain neurons. The Journal of Neuroscience 34:14272–14287. https://doi.org/10.1523/JNEUROSCI.2299-14.2014
- Acoustic pattern recognition and courtship songs: Insights from insects. Annual Review of Neuroscience 42:129–147. https://doi.org/10.1146/annurev-neuro-080317-061839
- A universal model for spike-frequency adaptation. Neural Computation 15:2523–2564. https://doi.org/10.1162/089976603322385063
- Spike-frequency adaptation generates intensity invariance in a primary auditory interneuron. Journal of Computational Neuroscience 24:113–136. https://doi.org/10.1007/s10827-007-0044-8
- Phenotypic variation and covariation indicate high evolvability of acoustic communication in crickets. Journal of Evolutionary Biology 28:1656–1669. https://doi.org/10.1111/jeb.12686
- Common circuit design in fly and mammalian motion vision. Nature Neuroscience 18:1067–1076. https://doi.org/10.1038/nn.4050
- Decoding temporal information: A model based on short-term synaptic plasticity. The Journal of Neuroscience 20:1129–1141.
- Pulse-Rate Recognition in an Insect: Evidence of a Role for Oscillatory Neurons. Journal of Comparative Physiology A 192:113–121. https://doi.org/10.1007/s00359-005-0053-x
- Computational principles underlying the recognition of acoustic signals in insects. Journal of Computational Neuroscience 35:75–85. https://doi.org/10.1007/s10827-013-0441-0
- Feature extraction and integration underlying perceptual decision making during courtship behavior. The Journal of Neuroscience 33:12136–12145. https://doi.org/10.1523/JNEUROSCI.0724-13.2013
- DeepSqueak: a deep learning-based system for detection and analysis of ultrasonic vocalizations. Neuropsychopharmacology 44:859–868. https://doi.org/10.1038/s41386-018-0303-6
- Temporal pattern processing in songbirds. Current Opinion in Neurobiology 28:179–187. https://doi.org/10.1016/j.conb.2014.08.003
- Auditory pattern recognition and steering in the cricket Teleogryllus oceanicus. Physiological Entomology 39:19–27. https://doi.org/10.1111/phen.12043
- Distinct roles for IT and IH in controlling the frequency and timing of rebound spike responses. The Journal of Physiology 589:5391–5413. https://doi.org/10.1113/jphysiol.2011.215632
- Cellular Adaptation Facilitates Sparse and Reliable Coding in Sensory Pathways. PLOS Computational Biology 9:10. https://doi.org/10.1371/journal.pcbi.1003251
- Sound Rhythms Are Encoded by Postinhibitory Rebound Spiking in the Superior Paraolivary Nucleus. The Journal of Neuroscience 31:12566–12578. https://doi.org/10.1523/JNEUROSCI.2450-11.2011
- Short-Term Synaptic Plasticity as a Temporal Filter. Trends in Neurosciences 24:381–385. https://doi.org/10.1016/s0166-2236(00)01835-x
- Temporal scales of auditory objects underlying birdsong vocal recognition. The Journal of the Acoustical Society of America 124:1350–1359. https://doi.org/10.1121/1.2945705
- Functional consequences of animal-to-animal variation in circuit parameters. Nature Neuroscience 12:1424–1430. https://doi.org/10.1038/nn.2404
- Multivariate female preference tests reveal latent perceptual biases. Proceedings. Biological Sciences 283:20161972. https://doi.org/10.1098/rspb.2016.1972
- Auditory processing at two time scales by the cricket Gryllus bimaculatus. The Journal of Experimental Biology 215:1681–1690. https://doi.org/10.1242/jeb.065466
- Receiver psychology and the design of animal signals. Trends in Neurosciences 16:430–436. https://doi.org/10.1016/0166-2236(93)90068-W
- Song pattern recognition in crickets based on a delay-line and coincidence-detector mechanism. Proceedings. Biological Sciences 284:20170745. https://doi.org/10.1098/rspb.2017.0745
- KCNQ4 K+ channels tune mechanoreceptors for normal touch sensation in mouse and man. Nature Neuroscience 15:138–145. https://doi.org/10.1038/nn.2985
- Filtering of temporal parameters of the calling song by cricket females of two closely related species: a behavioral analysis. Journal of Comparative Physiology A 180:621–630. https://doi.org/10.1007/s003590050078
- Acoustic feature extraction by cross-correlation in crickets? Journal of Comparative Physiology. A, Neuroethology, Sensory, Neural, and Behavioral Physiology 189:589–598. https://doi.org/10.1007/s00359-003-0438-7
- Walking in Fourier’s space: algorithms for the computation of periodicities in song patterns by the cricket Gryllus bimaculatus. Journal of Comparative Physiology. A, Neuroethology, Sensory, Neural, and Behavioral Physiology 195:971–987. https://doi.org/10.1007/s00359-009-0473-0
- Time and timing in the acoustic recognition system of crickets. Frontiers in Physiology 5:286. https://doi.org/10.3389/fphys.2014.00286
- Divergence in male cricket song and female preference functions in three allopatric sister species. Journal of Comparative Physiology. A, Neuroethology, Sensory, Neural, and Behavioral Physiology 202:347–360. https://doi.org/10.1007/s00359-016-1083-2
- Which model to use for cortical spiking neurons? IEEE Transactions on Neural Networks 15:1063–1070. https://doi.org/10.1109/TNN.2004.832719
- Modular timer networks: abdominal interneurons controlling the chirp and pulse pattern in a cricket calling song. Journal of Comparative Physiology A 206:921–938. https://doi.org/10.1007/s00359-020-01448-0
- Neural mechanisms underlying the evolvability of behaviour. Philosophical Transactions of the Royal Society B 366:2086–2099. https://doi.org/10.1098/rstb.2010.0336
- When Sound Stops: Offset Responses in the Auditory System. Trends in Neurosciences 41:712–728. https://doi.org/10.1016/j.tins.2018.08.009
- Calling song recognition in female crickets: temporal tuning of identified brain neurons matches behavior. The Journal of Neuroscience 32:9601–9612. https://doi.org/10.1523/JNEUROSCI.1170-12.2012
- Pattern recognition in field crickets: concepts and neural evidence. Journal of Comparative Physiology A 201:73–85. https://doi.org/10.1007/s00359-014-0949-4
- Auditory Temporal Computation: Interval Selectivity Based on Post-Inhibitory Rebound. Journal of Computational Neuroscience 13:125–142. https://doi.org/10.1023/a:1020162207511
- Degenerate coding in neural systems. Journal of Comparative Physiology. A, Neuroethology, Sensory, Neural, and Behavioral Physiology 191:995–1010. https://doi.org/10.1007/s00359-005-0026-0
- Vision: A Computational Investigation into the Human Representation and Processing of Visual Information (book). Henry Holt & Company.
- Neural representation of calling songs and their behavioral relevance in the grasshopper auditory system. Frontiers in Systems Neuroscience 8:183. https://doi.org/10.3389/fnsys.2014.00183
- Biophysical mechanisms underlying olfactory receptor neuron dynamics. Nature Neuroscience 14:208–216. https://doi.org/10.1038/nn.2725
- Using Neural Circuit Interrogation in Rodents to Unravel Human Speech Decoding. Frontiers in Neural Circuits 14:2. https://doi.org/10.3389/fncir.2020.00002
- Neuronal energy consumption: biophysics, efficiency and evolution. Current Opinion in Neurobiology 41:129–135. https://doi.org/10.1016/j.conb.2016.09.004
- Multivariate sexual selection in a rapidly evolving speciation phenotype. Proceedings of the Royal Society B 280:20130482. https://doi.org/10.1098/rspb.2013.0482
- Queer Current and Pacemaker: The Hyperpolarization-Activated Cation Current in Neurons. Annual Review of Physiology 58:299–327. https://doi.org/10.1146/annurev.ph.58.030196.001503
- Neural networks predict response biases of female túngara frogs. Proceedings of the Royal Society of London. Series B 265:279–285. https://doi.org/10.1098/rspb.1998.0293
- Pyramidal neurons switch from integrators in vitro to resonators under in vivo-like conditions. Journal of Neurophysiology 100:3030–3042. https://doi.org/10.1152/jn.90634.2008
- Similar network activity from disparate circuit parameters. Nature Neuroscience 7:1345–1352. https://doi.org/10.1038/nn1352
- Evolving olfactory systems on the fly. Trends in Genetics 26:307–316. https://doi.org/10.1016/j.tig.2010.04.004
- Firing-rate resonances in the peripheral auditory system of the cricket, Gryllus bimaculatus. Journal of Comparative Physiology A 201:1075–1090. https://doi.org/10.1007/s00359-015-1036-1
- Weighing the Evidence in Peters’ Rule: Does Neuronal Morphology Predict Connectivity? Trends in Neurosciences 40:63–71. https://doi.org/10.1016/j.tins.2016.11.007
- Species-specificity of temporal processing in the auditory midbrain of gray treefrogs: interval-counting neurons. Journal of Comparative Physiology A 201:485–503. https://doi.org/10.1007/s00359-015-0997-4
- Calling song signals and temporal preference functions in the cricket Teleogryllus leo. Journal of Comparative Physiology A 198:817–825. https://doi.org/10.1007/s00359-012-0751-0
- Perceptual Biases and Mate Choice. Annual Review of Ecology, Evolution, and Systematics 44:437–459. https://doi.org/10.1146/annurev-ecolsys-110512-135901
- High background noise shapes selective auditory filters in a tropical cricket. The Journal of Experimental Biology 214:1754–1762. https://doi.org/10.1242/jeb.053819
- On hearing with more than one ear: lessons from evolution. Nature Neuroscience 12:692–697. https://doi.org/10.1038/nn.2325
- An auditory feature detection circuit for sound pattern recognition. Science Advances 1:e1500325. https://doi.org/10.1126/sciadv.1500325
- Song recognition by temporal cues in a group of closely related bushcricket species (genus Tettigonia). Journal of Comparative Physiology A 183:401–410. https://doi.org/10.1007/s003590050266
- Two-Dimensional Time Coding in the Auditory Brainstem. The Journal of Neuroscience 25:9978–9988. https://doi.org/10.1523/JNEUROSCI.2666-05.2005
- Neural networks with dynamic synapses. Neural Computation 10:821–835. https://doi.org/10.1162/089976698300017502
- Robustness and evolvability: a paradox resolved. Proceedings of the Royal Society B 275:91–100. https://doi.org/10.1098/rspb.2007.1137
- The molecular origins of evolutionary innovations. Trends in Genetics 27:397–410. https://doi.org/10.1016/j.tig.2011.06.002
- Resonant neurons and bushcricket behaviour. Journal of Comparative Physiology. A, Neuroethology, Sensory, Neural, and Behavioral Physiology 193:285–288. https://doi.org/10.1007/s00359-006-0199-1
- Genetic coupling of signal and preference facilitates sexual isolation during rapid speciation. Proceedings. Biological Sciences 286:20191607. https://doi.org/10.1098/rspb.2019.1607
- Predicting neuronal dynamics with a delayed gain control model. PLOS Computational Biology 15:e1007484. https://doi.org/10.1371/journal.pcbi.1007484
Article and author information
Author details
Funding
Biotechnology and Biological Sciences Research Council (BB/J01835X/1)
- Berthold Hedwig
- Konstantinos Kostarakos
Royal Society (Newton International Fellowship)
- Konstantinos Kostarakos
Leibniz-Gemeinschaft (SAW 2012-MfN-3)
- R Matthias Hennig
Deutsche Forschungsgemeinschaft (HE 2812/4-1)
- R Matthias Hennig
Deutsche Forschungsgemeinschaft (HE 2812/5-1)
- R Matthias Hennig
Deutsche Forschungsgemeinschaft (CL 596/1-1)
- Jan Clemens
Deutsche Forschungsgemeinschaft (CL 596/2-1)
- Jan Clemens
Deutsche Forschungsgemeinschaft (SCHO 1822/3-1)
- Stefan Schöneich
The funders had no role in study design, data collection and interpretation, or the decision to submit the work for publication.
Acknowledgements
We thank Biswa Sengupta for discussions during an early phase of the project.
Copyright
© 2021, Clemens et al.
This article is distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use and redistribution provided that the original author and source are credited.