Resonating neurons stabilize heterogeneous grid-cell networks

Divyansh Mittal, Rishikesh Narayanan (corresponding author)
Cellular Neurophysiology Laboratory, Molecular Biophysics Unit, Indian Institute of Science, India

Abstract

A central theme that governs the functional design of biological networks is their ability to sustain stable function despite widespread parametric variability. Here, we investigated the impact of distinct forms of biological heterogeneities on the stability of a two-dimensional continuous attractor network (CAN) implicated in grid-patterned activity generation. We show that increasing degrees of biological heterogeneities progressively disrupted the emergence of grid-patterned activity and resulted in progressively larger perturbations in low-frequency neural activity. We postulated that targeted suppression of low-frequency perturbations could ameliorate heterogeneity-induced disruptions of grid-patterned activity. To test this, we introduced intrinsic resonance, a physiological mechanism to suppress low-frequency activity, either by adding an additional high-pass filter (phenomenological) or by incorporating a slow negative feedback loop (mechanistic) into our model neurons. Strikingly, CAN models with resonating neurons were resilient to the incorporation of heterogeneities and exhibited stable grid-patterned firing. We found CAN models with mechanistic resonators to be more effective in targeted suppression of low-frequency activity, with the slow kinetics of the negative feedback loop essential in stabilizing these networks. As low-frequency perturbations (1/f noise) are pervasive across biological systems, our analyses suggest a universal role for mechanisms that suppress low-frequency activity in stabilizing heterogeneous biological networks.

Introduction

Stability of network function, defined as the network’s ability to elicit robust functional outcomes despite perturbations to or widespread variability in its constitutive components, is a central theme that governs the functional design of several biological networks. Biological systems exhibit ubiquitous parametric variability spanning different scales of organization, quantified through statistical heterogeneities in the underlying parameters. Strikingly, in spite of such large-scale heterogeneities, outputs of biological networks are stable and are precisely tuned to meet physiological demands. A central question that spans different scales of organization concerns the ability of biological networks to achieve physiological stability in the face of ubiquitous parametric variability (Turrigiano and Nelson, 2000; Edelman and Gally, 2001; Maslov and Sneppen, 2002; Stelling et al., 2004; Marder and Goaillard, 2006; Barkai and Shilo, 2007; Kitano, 2007; Félix and Barkoulas, 2015).

Biological heterogeneities are known to play critical roles in governing stability of network function, through intricate and complex interactions among mechanisms underlying functional emergence (Edelman and Gally, 2001; Renart et al., 2003; Marder and Goaillard, 2006; Tikidji-Hamburyan et al., 2015; Mishra and Narayanan, 2019; Rathour and Narayanan, 2019). However, an overwhelming majority of theoretical and modeling frameworks lack the foundation to evaluate the impact of such heterogeneities on network output, as they employ unnatural homogeneous networks in assessing network function. The paucity of heterogeneous network frameworks is partly attributable to the enormous analytical or computational costs involved in assessing heterogeneous networks. In this study, we quantitatively address questions on the impact of distinct forms of biological heterogeneities on the functional stability of a two-dimensional continuous attractor network (CAN), which has been implicated in the generation of patterned neuronal activity in grid cells of the medial entorhinal cortex (Burak and Fiete, 2009; Knierim and Zhang, 2012; Couey et al., 2013; Domnisoru et al., 2013; Schmidt-Hieber and Häusser, 2013; Yoon et al., 2013; Tukker et al., 2021). Although the continuous attractor framework has offered insights about information encoding across several neural circuits (Samsonovich and McNaughton, 1997; Seung et al., 2000; Renart et al., 2003; Wills et al., 2005; Burak and Fiete, 2009; Knierim and Zhang, 2012; Schmidt-Hieber and Häusser, 2013; Yoon et al., 2013; Kim et al., 2017), the fundamental question on the stability of 2D CAN models in the presence of biological heterogeneities remains unexplored. Here, we systematically assessed the impact of biological heterogeneities on stability of emergent spatial representations in a 2D CAN model and unveiled a physiologically plausible neural mechanism that promotes stability despite the expression of heterogeneities.

We first developed an algorithm to generate virtual trajectories that closely mimicked animal traversals in an open arena, improving computational efficiency by covering the entire arena within a shorter duration. We employed these virtual trajectories to drive a rate-based homogeneous CAN model that elicited grid-patterned neural activity (Burak and Fiete, 2009) and systematically introduced different degrees of three distinct forms of biological heterogeneities. The three distinct forms of biological heterogeneities that we introduced, either individually or together, were in neuronal intrinsic properties, in afferent inputs carrying behavioral information and in local-circuit synaptic connectivity. We found that the incorporation of these different forms of biological heterogeneities disrupted the emergence of grid-patterned activity by introducing perturbations in neural activity, predominantly in low-frequency components. In the default model where neurons were integrators, grid patterns and spatial information in neural activity were progressively lost with increasing degrees of biological heterogeneities and were accompanied by progressive increases in low-frequency perturbations.

As heterogeneity-induced perturbations to neural activity were predominantly in the lower frequencies, we postulated that suppressing low-frequency perturbations could ameliorate the disruptive impact of biological heterogeneities on grid-patterned activity. We recognized intrinsic neuronal resonance as an established biological mechanism that suppresses low-frequency components, effectuated by the expression of resonating conductances endowed with specific biophysical characteristics (Hutcheon and Yarom, 2000; Narayanan and Johnston, 2008). Consequently, we hypothesized that intrinsic neuronal resonance could stabilize the heterogeneous grid-cell network through targeted suppression of low-frequency perturbations. To test this hypothesis, we developed two distinct strategies to introduce intrinsic resonance in our rate-based neuronal model to mimic the function of resonating conductances in biological neurons: (1) a phenomenological approach where an additional tunable high-pass filter (HPF) was incorporated into single-neuron dynamics and (2) a mechanistic approach where resonance was realized through a slow negative feedback loop akin to the physiological mechanism behind neuronal intrinsic resonance (Hutcheon and Yarom, 2000). We confirmed that the emergence of grid-patterned activity was not affected by replacing all the integrator neurons in the homogeneous CAN model with theta-frequency resonators (either phenomenological or mechanistic). We systematically incorporated different forms of biological heterogeneities into the 2D resonator CAN model and found that intrinsic neuronal resonance stabilized heterogeneous neural networks, through suppression of low-frequency components of neural activity.
Although this stabilization was observed with both phenomenological and mechanistic resonator networks, the mechanistic resonator was extremely effective in suppressing low-frequency activity without introducing spurious high-frequency components into neural activity. Importantly, we found that the slow kinetics of the negative feedback loop was essential in stabilizing CAN networks built with mechanistic resonators.

Together, our study unveils an important role for intrinsic neuronal resonance in stabilizing network physiology through the suppression of heterogeneity-induced perturbations in low-frequency components of network activity. Our analyses suggest that intrinsic neuronal resonance constitutes a cellular-scale activity-dependent negative feedback mechanism, a specific instance of a well-established network motif that effectuates stability and suppresses perturbations across different biological networks (Savageau, 1974; Becskei and Serrano, 2000; Thattai and van Oudenaarden, 2001; Austin et al., 2006; Dublanche et al., 2006; Raj and van Oudenaarden, 2008; Lestas et al., 2010; Cheong et al., 2011; Voliotis et al., 2014). As the dominance of low-frequency perturbations is pervasive across biological networks (Hausdorff and Peng, 1996; Gilden, 2001; Gisiger, 2001; Ward, 2001; Buzsaki, 2006), we postulate that mechanisms that suppress low-frequency components could be a generalized route to stabilize heterogeneous biological networks.

Results

The rate-based CAN model consisting of a 2D neural sheet (default size: 60 × 60 = 3600 neurons) with the default integrator neurons for eliciting grid-cell activity was adopted from Burak and Fiete, 2009. The sides of this neural sheet were connected, yielding a toroidal (or periodic) network configuration. Each neuron in the CAN model received two distinct sets of synaptic inputs, one from other neurons within the network and another feed-forward afferent input that was dependent on the velocity of the virtual animal. The movement of the virtual animal was modeled to occur in a circular 2D spatial arena (Figure 1A; Real trajectory) and was simulated employing recordings of rat movement (Hafting et al., 2005) to replicate earlier results (Burak and Fiete, 2009). However, the real trajectory spanned 590 s of real-world time and required considerable simulation time to sufficiently explore the entire arena, essential for computing high-resolution spatial activity maps. Consequently, the computational costs required for exploring the parametric space of the CAN model, with different forms of network heterogeneities in networks of different sizes and endowed with distinct kinds of neurons, were exorbitant. Therefore, we developed a virtual trajectory, mimicking smooth animal movement within the arena, but with sharp turns at the borders.
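The network dynamics described above can be sketched as a single Euler update on a rate-based sheet. This is an illustrative sketch under stated assumptions, not the paper's code: the toroidal Mexican-hat connectivity is replaced here by uniform inhibition, the transfer function is assumed to be rectifying, and the sheet size and integration step are placeholders.

```python
import numpy as np

def can_step(s, W, v, e, tau=0.010, alpha=45.0, dt=0.0005):
    """One Euler step of a rate-based CAN sheet (illustrative sketch).

    s : (N,) firing rates;  W : (N, N) recurrent weights (a toroidal
    Mexican-hat matrix in the paper);  v : (2,) velocity of the virtual
    animal;  e : (N, 2) preferred-direction vectors of the neurons."""
    b = 1.0 + alpha * (e @ v)          # feed-forward, velocity-scaled drive
    drive = W @ s + b                  # recurrent + afferent input
    f = np.maximum(drive, 0.0)         # rectifying transfer function
    return s + (dt / tau) * (-s + f)   # leaky integration, time constant tau

rng = np.random.default_rng(0)
N = 20 * 20                            # small sheet for illustration (paper: 60 x 60)
s = rng.random(N)
W = -0.05 * np.ones((N, N))            # uniform inhibition stands in for Mexican hat
e = np.tile(np.array([[1., 0.], [0., 1.], [-1., 0.], [0., -1.]]), (N // 4, 1))
s_next = can_step(s, W, rng.standard_normal(2), e)
```

Iterating this step along a trajectory, and accumulating each neuron's rate at the animal's position, is what yields the spatial rate maps analyzed in this study.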

Figure 1 with 1 supplement
A fast virtual trajectory developed for simulating rodent run in a two-dimensional circular arena elicits grid-cell activity in a continuous attractor network (CAN) model.

(A) Panels within rectangular box: simulation of a CAN model (60 × 60 neural network) using a 589 s long real trajectory from a rat (Hafting et al., 2005) yielded grid-cell activity. Other panels: A virtual trajectory (see Materials and methods) was employed for simulating a CAN model (60 × 60 neural network) for different time points. The emergent activity patterns for nine different run times (Trun) of the virtual animal are shown to yield grid-cell activity. Top subpanels show color-coded neural activity through the trajectory, and bottom subpanels represent a smoothened spatial profile of neuronal activity shown in the respective top subpanels. (B) Pearson’s correlation coefficient between the spatial autocorrelation of rate maps using the real trajectory and the spatial autocorrelation of rate maps from the virtual trajectory for the nine different values of Trun, plotted for all neurons (n = 3600) in the network.

Our analyses demonstrated the ability of virtual trajectories to cover the entire arena in less time (Figure 1A; Virtual trajectory), without compromising the accuracy of the spatial activity maps constructed from this trajectory in comparison to the maps obtained with the real trajectory (Figure 1B, Figure 1—figure supplement 1). Specifically, the correlation values between the spatial autocorrelation of rate maps of individual grid cells for the real trajectory and that from the virtual trajectory at different rat runtimes (Trun) were high across all measured runtimes (Figure 1B). These correlation values showed a saturating increase with increase in Trun. As there was no substantial improvement in accuracy beyond Trun = 100 s, we employed Trun = 100 s for all simulations (compared to the 590 s of the real trajectory). Therefore, our virtual trajectory covered similar areas in approximately one-sixth the duration (Figure 1A), reducing the simulation duration by a factor of ~10 compared to the real rat trajectory, while yielding similar spatial activity maps (Figure 1B, Figure 1—figure supplement 1). Our algorithm also allowed us to generate fast virtual trajectories mimicking rat trajectories in open arenas of different shapes (Circle: Figure 1A; Square: Figure 2E).
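A minimal version of such a trajectory generator, assuming a unit-radius circular arena and purely illustrative speed and turn parameters (none of these values are from the paper), could look like:

```python
import numpy as np

def virtual_trajectory(T_run=100.0, dt=0.001, radius=1.0, speed=0.25,
                       turn_sd=0.2, seed=0):
    """Sketch of a fast virtual trajectory: smooth heading drift inside a
    circular arena, with a sharp turn back toward the centre at the border.
    All parameter values here are illustrative placeholders."""
    rng = np.random.default_rng(seed)
    n = int(T_run / dt)
    pos = np.zeros((n, 2))
    theta = rng.uniform(0, 2 * np.pi)
    for i in range(1, n):
        theta += turn_sd * np.sqrt(dt) * rng.standard_normal()   # smooth drift
        step = speed * dt * np.array([np.cos(theta), np.sin(theta)])
        nxt = pos[i - 1] + step
        if np.linalg.norm(nxt) > radius:           # hit the border:
            theta = np.arctan2(-nxt[1], -nxt[0])   # sharp turn toward centre
            nxt = pos[i - 1]
        pos[i] = nxt
    return pos

pos = virtual_trajectory(T_run=5.0)
```

Randomizing the turn angle at the border, or swapping the circular boundary test for a rectangular one, gives the square-arena variant mentioned above.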

Biologically prevalent network heterogeneities disrupt the emergence of grid-cell activity in CAN models.

(A) Intrinsic heterogeneity was introduced by setting the integration time constant (τ) of each neuron to a random value picked from a uniform distribution, whose range was increased to enhance the degree of intrinsic heterogeneity. The values of τ for the 3600 neurons in the 60 × 60 CAN model are shown for 5 degrees of intrinsic heterogeneity. (B) Afferent heterogeneity was introduced by setting the velocity-scaling factor (α) of each neuron to a random value picked from a uniform distribution, whose range was increased to enhance the degree of afferent heterogeneity. The values of α for the 3600 neurons are shown for 5 degrees of afferent heterogeneity. (C) Synaptic heterogeneity was introduced as an additive jitter to the intra-network Mexican hat connectivity matrix, with the magnitude of additive jitter defining the degree of synaptic heterogeneity. Plotted are the root mean square error (RMSE) values computed between the connectivity matrix of the ‘no jitter’ case and that for different degrees of synaptic heterogeneities, across all synapses. (D) Illustration of a one-dimensional slice of synaptic strengths with different degrees of heterogeneity, depicted for Mexican-hat connectivity of a given cell to 60 other cells in the network. (E) Top left, virtual trajectory employed to obtain activity patterns of the CAN model. Top center, Example rate maps of grid-cell activity in a homogeneous CAN model. Top right, smoothed version of the rate map. Rows 2–5: smoothed version of rate maps obtained from CAN models endowed with five different degrees (increasing left to right) of disparate forms (Row 2: intrinsic; Row 3: afferent; Row 4: synaptic; and Row 5: all three heterogeneities together) of heterogeneities. (F) Percentage change in the grid score of individual neurons (n = 3600) in networks endowed with the four forms and five degrees of heterogeneities, compared to the grid score of respective neurons in the homogeneous network.

Biologically prevalent network heterogeneities disrupted the emergence of grid-cell activity in CAN models

The simulated CAN model (Figure 1) is an idealized homogeneous network of intrinsically identical neurons, endowed with precise local connectivity patterns and identical response properties for afferent inputs. Although this idealized CAN model provides an effective phenomenological framework to understand and simulate grid-cell activity, the underlying network does not account for the ubiquitous biological heterogeneities that span neuronal intrinsic properties and synaptic connectivity patterns. Would the CAN model sustain grid-cell activity in a network endowed with different degrees of biological heterogeneities in neuronal intrinsic properties and strengths of afferent/local synaptic inputs?

To address this, we systematically introduced three distinct forms of biological heterogeneities into the rate-based CAN model. We introduced intrinsic heterogeneity by randomizing the value of the neuronal integration time constant τ across neurons in the network. Within the rate-based framework employed here, randomization of τ reflected physiological heterogeneities in neuronal excitability properties, and larger spans of τ defined higher degrees of intrinsic heterogeneity (Table 1; Figure 2A). Afferent heterogeneity referred to heterogeneities in the coupling of afferent velocity inputs onto individual neurons. Different degrees of afferent heterogeneities were introduced by randomizing the velocity scaling factor (α) in each neuron through uniform distributions of different ranges (Table 1; Figure 2B). Synaptic heterogeneity involved the local connectivity matrix, and was introduced as additive jitter to the default center-surround connectivity matrix. Synaptic jitter was independently sampled for each connection from a uniform distribution, whose span defined the degree of synaptic heterogeneity (Table 1; Figure 2C). In homogeneous networks, τ (=10 ms), α (=45) and Wij (center-shifted Mexican hat connectivity without jitter) were identical for all neurons in the network. We assessed four distinct sets of heterogeneous networks: three sets endowed with one of intrinsic, afferent and synaptic heterogeneities, and a fourth where all three forms of heterogeneities were co-expressed. When all heterogeneities were present together, all three sets of parameters were set randomly with the degree of heterogeneity defining the bounds of the associated distribution (Table 1).
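With the bounds listed in Table 1, the three forms of heterogeneity reduce to independent uniform draws, per neuron for τ and α and per synapse for the additive jitter. The sketch below assumes a homogeneous connectivity matrix `W_hom` is available and that the jitter bounds apply in the same units as the connectivity matrix.

```python
import numpy as np

# Bounds per Table 1: (tau_lo, tau_hi) in ms, (alpha_lo, alpha_hi),
# (jitter_lo, jitter_hi) for the additive synaptic jitter.
HET_BOUNDS = {
    1: ((8, 12), (35, 55), (0, 300)),
    2: ((6, 14), (25, 65), (0, 600)),
    3: ((4, 16), (15, 75), (0, 900)),
    4: ((2, 18), (5, 85), (0, 1200)),
    5: ((1, 20), (0, 100), (0, 1500)),
}

def heterogeneous_params(W_hom, degree, rng):
    """Sample per-neuron tau and alpha, and per-synapse additive jitter,
    for the requested degree of heterogeneity (homogeneous values:
    tau = 10 ms, alpha = 45, W_hom without jitter)."""
    n = W_hom.shape[0]
    (t_lo, t_hi), (a_lo, a_hi), (j_lo, j_hi) = HET_BOUNDS[degree]
    tau = rng.uniform(t_lo, t_hi, size=n)       # intrinsic heterogeneity
    alpha = rng.uniform(a_lo, a_hi, size=n)     # afferent heterogeneity
    W = W_hom + rng.uniform(j_lo, j_hi, size=W_hom.shape)  # synaptic jitter
    return tau, alpha, W

rng = np.random.default_rng(1)
tau, alpha, W = heterogeneous_params(np.zeros((100, 100)), degree=3, rng=rng)
```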

We found that the incorporation of any of the three forms of heterogeneities into the CAN model resulted in the disruption of grid pattern formation, with the deleterious impact increasing with increasing degree of heterogeneity (Figures 2E and 3). Quantitatively, we employed grid score (Fyhn et al., 2004; Hafting et al., 2005) to measure the emergence of pattern formation in the CAN model activity, and found a reduction in grid score with increase in the degree of each form of heterogeneity (Figure 2F). We found a hierarchy in the disruptive impact of different types of heterogeneities, with synaptic heterogeneity producing the largest reduction to grid score, followed by afferent heterogeneity. Intrinsic heterogeneity was the least disruptive in the emergence of grid-cell activity, whereby a high degree of heterogeneity was required to hamper grid pattern formation (Figure 2E,F). Simultaneous progressive introduction of all three forms of heterogeneities at multiple degrees resulted in a similar progression of grid-pattern disruption (Figure 2E,F). The introduction of the highest degree of all heterogeneities into the CAN model resulted in a complete loss of patterned activity and a marked reduction in the grid score values of all neurons in the network (Figure 2E,F). We confirmed that these results were not artifacts of specific network initialization choices by observing similar grid score reductions in five additional trials with distinct initializations of CAN models endowed with all three forms of heterogeneities (Figure 3—figure supplement 1F). In addition, to rule out the possibility that these conclusions were specific to the choice of the virtual trajectory employed, we repeated our simulations and analyses with a different virtual trajectory (Figure 3—figure supplement 2A) and found similar reductions in grid score with increase in the degree of all heterogeneities (Figure 3—figure supplement 2B).
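The grid score used above is the standard rotational-symmetry measure on the spatial autocorrelogram (Fyhn et al., 2004; Hafting et al., 2005). A simplified sketch, which omits the annulus-based field masking of common implementations and uses a central disk instead, is:

```python
import numpy as np
from scipy import ndimage, signal

def grid_score(rate_map):
    """Grid score sketch: min correlation of the spatial autocorrelogram
    with itself rotated by 60/120 deg, minus max correlation at
    30/90/150 deg. Positive scores indicate hexagonal symmetry."""
    z = rate_map - rate_map.mean()
    ac = signal.correlate2d(z, z, mode='full')      # spatial autocorrelogram
    h, w = ac.shape
    yy, xx = np.mgrid[:h, :w]
    mask = (yy - h // 2) ** 2 + (xx - w // 2) ** 2 <= (min(h, w) // 3) ** 2

    def corr_at(angle):
        rot = ndimage.rotate(ac, angle, reshape=False, order=1)
        return np.corrcoef(ac[mask], rot[mask])[0, 1]

    return (min(corr_at(60), corr_at(120))
            - max(corr_at(30), corr_at(90), corr_at(150)))

# Synthetic check: a hexagonal pattern (three plane waves 60 deg apart)
# should score positive; a square lattice should score negative.
y, x = np.mgrid[:64, :64] * (4 * np.pi / 64)
hex_map = sum(np.cos(3 * (np.cos(a) * x + np.sin(a) * y))
              for a in (0.0, np.pi / 3, 2 * np.pi / 3))
square_map = np.cos(3 * x) + np.cos(3 * y)
```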

Figure 3 with 3 supplements
Quantification of the disruption in grid-cell activity induced by different forms of network heterogeneities in the CAN model.

Grid-cell activity of individual neurons in the network was quantified by eight different measurements, for CAN models endowed independently with intrinsic, afferent, or synaptic heterogeneities or a combination of all three heterogeneities. (A–G) Depicted are percentage changes in each of average firing rate (A), peak firing rate (B), mean size (C), number (D), average spacing (E), information rate (F), and sparsity (G) of grid fields for individual neurons (n = 3600) in networks endowed with distinct forms of heterogeneities, compared to the corresponding values for the respective neurons in the homogeneous network. (H) Grid score of individual cells (n = 3600) in the network plotted as functions of integration time constant (τ), velocity modulation factor (α), and root mean square error (RMSE) between the connectivity matrices of the homogeneous and the heterogeneous CAN models. Different colors specify different degrees of heterogeneity. The three plots with reference to τ, α, and RMSE are from networks endowed with intrinsic, afferent, and synaptic heterogeneity, respectively.

Detailed quantitative analysis of grid-cell activity showed that the introduction of heterogeneities did not result in marked population-wide changes to broad measures such as average firing rate, mean size of grid fields and average grid spacing (Figure 3, Figure 3—figure supplements 1–2). Instead, the introduction of parametric heterogeneities resulted in a loss of spatial selectivity and disruption of patterned activity, reflected as progressive reductions in grid score, peak firing rate, information rate and sparsity (Figures 2–3, Figure 3—figure supplements 1–2). Our results showed that in networks with intrinsic or afferent heterogeneities, grid score of individual neurons was not dependent on the actual value of the associated parameter (τ or α, respectively), but was critically reliant on the degree of heterogeneity (Figure 3H). Specifically, for a given degree of intrinsic or afferent heterogeneity, the grid score value spanned similar ranges for the low or high value of the associated parameter; but grid score reduced with increased degree of heterogeneities. In addition, grid score of cells reduced with increase in local synaptic jitter, which increased with the degree of synaptic heterogeneity (Figure 3H).

Thus far, our analyses involved a CAN model with a specific size (60 × 60). Would our results on heterogeneity-driven disruption of grid-patterned activity extend to CAN models of other sizes? To address this, we repeated our simulations with networks of size 40 × 40, 50 × 50, 80 × 80 and 120 × 120, and introduced different degrees of all three forms of heterogeneities into the network. We found that the disruptive impact of heterogeneities on grid-patterned activity was consistent across all tested networks (Figure 3—figure supplement 3A), manifesting as a reduction in grid score with increased degree of heterogeneities (Figure 3—figure supplement 3B). Our results also showed that the disruptive impact of heterogeneities on grid-patterned activity increased with network size. This size dependence specifically manifested as a complete loss of spatially selective firing patterns (Figure 3—figure supplement 3A) and the consistent drop in the grid score value to zero (i.e., %change = –100 in Figure 3—figure supplement 3B) across all neurons in the 120 × 120 network with high degree of heterogeneities. We also observed a progressive reduction in the average and the peak firing rates with increase in the degree of heterogeneities across all network sizes, although with a pronounced impact in larger networks (Figure 3—figure supplement 3C,D).

Together, our results demonstrated that the introduction of physiologically relevant heterogeneities into the CAN model resulted in a marked disruption of grid-patterned activity. The disruption manifested as a loss in spatial selectivity in the activity of individual cells in the network, progressively linked to the degree of heterogeneity. Although all forms of heterogeneities resulted in disruption of patterned activity, there was differential sensitivity to different heterogeneities, with heterogeneity in local network connectivity playing a dominant role in hampering spatial selectivity and patterned activity.

Incorporation of biological heterogeneities predominantly altered neural activity in low frequencies

How does the presence of heterogeneities affect activity patterns of neurons in the network as they respond to the movement of the virtual animal? A systematic way to explore patterns of neural activity is to assess the relative power of specific frequency components in neural activity. Therefore, we subjected the temporal outputs of individual neurons (across the entire period of the simulation) to spectral analysis and found neural activity to follow a typical response curve that was dominated by lower frequency components, with little power in the higher frequencies (e.g., Figure 4A, HN). This is to be expected as a consequence of the low-pass filtering inherent to the neuronal model, reflective of membrane filtering.

Incorporation of biological heterogeneities predominantly altered neural activity in low frequencies.

(A–H) Left, Magnitude spectra of temporal activity patterns of five example neurons residing in a homogeneous network (HN) or in networks with different forms and degrees of heterogeneities. Right, Normalized variance of the differences between the magnitude spectra of temporal activity of neurons in homogeneous vs. heterogeneous networks, across different forms and degrees of heterogeneities, plotted as a function of frequency.

We performed spectral analyses of temporal activity patterns of neurons from networks endowed with different forms of heterogeneities at various degrees (Figure 4). As biological heterogeneities were incorporated either by reducing or by increasing default parameter values (Table 1), there was considerable variability in how individual neurons responded to such heterogeneities (Figure 4). Specifically, when compared against the respective neuron in the homogeneous network, some neurons showed increases in activity magnitude at certain frequencies and others showed a reduction in magnitude (e.g., Figure 4A). To quantify this variability in neuronal responses, we first computed the difference between frequency-dependent activity profiles of individual neurons obtained in the presence of specific heterogeneities and of the same neuron in the homogeneous network (with identical initial conditions and subjected to identical afferent inputs). We computed activity differences for all the 3600 neurons in the network and plotted the variance of these differences as a function of frequency (e.g., Figure 4B). Strikingly, we found that the impact of introducing biological heterogeneities predominantly altered lower frequency components of neural activity, with little to no impact on higher frequencies. In addition, the variance in the deviation of neural activity from the homogeneous network progressively increased with higher degrees of heterogeneities, with large deviations occurring in the lower frequencies (Figure 4).
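The variance-of-spectral-differences measure can be sketched as follows. The toy data below (a 2 Hz component of random per-neuron amplitude added to each trace) are purely illustrative and simply show that the variance localizes to the perturbed low frequency.

```python
import numpy as np

def spectral_deviation_variance(act_hom, act_het, dt):
    """Variance, across neurons, of the difference between magnitude
    spectra of each neuron's activity in the heterogeneous vs. the
    homogeneous network (rows: neurons, columns: time points)."""
    mag_hom = np.abs(np.fft.rfft(act_hom, axis=1))
    mag_het = np.abs(np.fft.rfft(act_het, axis=1))
    diff = mag_het - mag_hom                 # per-neuron spectral deviation
    freqs = np.fft.rfftfreq(act_hom.shape[1], d=dt)
    return freqs, diff.var(axis=0)           # variance across neurons

# Toy example: 50 neurons, 1 s of activity at 1 kHz; heterogeneity adds
# a low-frequency (2 Hz) component of random amplitude to each neuron.
rng = np.random.default_rng(0)
t = np.arange(1000) * 0.001
hom = np.tile(np.sin(2 * np.pi * 5 * t), (50, 1))
het = hom + rng.uniform(0.5, 1.5, (50, 1)) * np.sin(2 * np.pi * 2 * t)
freqs, var = spectral_deviation_variance(hom, het, dt=0.001)
```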

Table 1
Forms and degrees of heterogeneities introduced in the CAN model of grid cell activity.
Degree of        Intrinsic heterogeneity (τ)   Afferent heterogeneity (α)    Synaptic heterogeneity (Wij)
heterogeneity    Lower bound    Upper bound    Lower bound    Upper bound    Lower bound    Upper bound
1                8              12             35             55             0              300
2                6              14             25             65             0              600
3                4              16             15             75             0              900
4                2              18             5              85             0              1200
5                1              20             0              100            0              1500

Together, these observations provided an important insight that the presence of physiologically relevant network heterogeneities predominantly affected lower frequency components of neural activity, with higher degrees of heterogeneity yielding larger variance in the deviation of low-frequency activity from the homogeneous network.

Introducing intrinsic resonance in rate-based neurons: phenomenological model

Several cortical and hippocampal neuronal structures exhibit intrinsic resonance, which allows these structures to maximally respond to a specific input frequency, with neural response falling on either side of this resonance frequency (Hutcheon and Yarom, 2000; Narayanan and Johnston, 2007; Narayanan and Johnston, 2008; Das et al., 2017). More specifically, excitatory (Erchova et al., 2004; Giocomo et al., 2007; Nolan et al., 2007; Garden et al., 2008; Pastoll et al., 2012) and inhibitory neurons (Boehlen et al., 2016) in the superficial layers of the medial entorhinal cortex manifest resonance in the theta frequency range (4–10 Hz). Therefore, the model of individual neurons in the CAN model as integrators of afferent activity is inconsistent with the resonating structure intrinsic to their physiological counterparts. Intrinsic resonance in neurons is mediated by the expression of slow restorative ion channels, such as the HCN or the M-type potassium, which mediate resonating conductances that suppress low-frequency inputs by virtue of their kinetics and voltage-dependent properties (Hutcheon and Yarom, 2000; Narayanan and Johnston, 2008; Hu et al., 2009). As the incorporation of biological heterogeneities predominantly altered low-frequency neural activity (Figure 4), we hypothesized that the expression of intrinsic neuronal resonance (especially the associated suppression of low-frequency components) could counteract the disruptive impact of biological heterogeneities on network activity, thereby stabilizing grid-like activity patterns.
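To illustrate why a slow restorative conductance yields resonance, consider a minimal linear caricature of a rate unit with a slow negative feedback variable m. The time constants and feedback gain below are illustrative assumptions, not the paper's values; the point is only that a feedback loop much slower than the membrane time constant subtracts the low-frequency part of the response, producing a band-pass profile.

```python
import numpy as np

def feedback_neuron_gain(f_hz, tau=0.010, tau_m=0.075, g=2.0):
    """Linear gain of a unit with slow negative feedback (illustrative):
        tau   ds/dt = -s - g*m + x(t)
        tau_m dm/dt =  s - m
    In the frequency domain (w = 2*pi*i*f):
        (1 + w*tau) S = X - g*M,   (1 + w*tau_m) M = S
    so the input-to-rate gain is 1 / (1 + w*tau + g / (1 + w*tau_m))."""
    w = 2j * np.pi * f_hz
    return np.abs(1.0 / (1.0 + w * tau + g / (1.0 + w * tau_m)))

f = np.linspace(0.1, 60, 6000)
f_R = f[np.argmax(feedback_neuron_gain(f))]   # resonance frequency (Hz)
```

With these placeholder values the gain peaks in the theta range while frequencies near DC are attenuated by the factor 1/(1 + g), mirroring the suppression of low-frequency inputs by resonating conductances.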

An essential requirement in testing this hypothesis was to introduce intrinsic resonance in the rate-based neuronal models in our network. To do this, we noted that the integrative properties of the integrator model neuron are mediated by the low-pass filtering kinetics associated with the parameter τ. We confirmed the low-pass filter (LPF) characteristics of the integrator neurons by recording the response of individual neurons to a chirp stimulus (Figure 5A,B). As this provides an equivalent to the LPF associated with the membrane time constant, we needed a HPF to mimic resonating conductances that suppress low-frequency activity in biological neurons. A simple approach to suppress low-frequency components is to introduce a differentiator, which we confirmed by passing a chirp stimulus through a differentiator (Figure 5A,C). We therefore passed the outcome of the integrator (endowed with LPF characteristics) through a first-order differentiator (HPF), with the postulate that the net transfer function would manifest resonance. We tested this by first subjecting a chirp stimulus to the integrator neuron dynamics and feeding that output to the differentiator. We found that the net output expressed resonance, acting as a band-pass filter (Figure 5A,C).

Incorporation of an additional high-pass filter into neuronal dynamics introduces resonance in individual rate-based neurons.

(A) Responses of neurons with low-pass (integrator; blue), high-pass (black) and band-pass (resonator; red) filtering structures to a chirp stimulus (top). Equations (7–8) were employed for computing these responses. (B) Response magnitude of an integrator neuron (low-pass filter) as a function of input frequency, derived from response to the chirp stimulus. (C) Response magnitude of a resonator neuron (band-pass filter; red) as a function of input frequency, derived from response to the chirp stimulus, shown to emerge as a combination of low- (blue) and high-pass (black) filters. fR represents resonance frequency. The response magnitudes in (B, C) were derived from respective color-coded traces shown in (A). (D–E) Tuning resonance frequency by altering the low-pass filter characteristics. Response magnitudes of three different resonating neurons with identical HPF exponent (ε = 0.3), but with different integrator time constants (τ), plotted as functions of frequency (D). Resonance frequency can be tuned by adjusting τ for different fixed values of ε, with an increase in τ yielding a reduction in fR (E). (F–G) Tuning resonance frequency by altering the high-pass filter characteristics. Response magnitudes of three different resonating neurons with identical τ (=10 ms), but with different values for ε, plotted as functions of frequency (F). Resonance frequency can be tuned by adjusting ε for different fixed values of τ, with an increase in ε yielding an increase in fR (G).

Physiologically, tuning of intrinsic resonance to specific frequencies could be achieved by altering the characteristics of the HPF or the LPF (Hutcheon and Yarom, 2000; Narayanan and Johnston, 2008; Das et al., 2017; Mittal and Narayanan, 2018; Rathour and Narayanan, 2019). Matching this physiological tuning procedure, we tuned resonance frequency (fR) in our rate-based resonator model either by changing τ, which governs the LPF (Figure 5D,E), or by altering an exponent ε (Equation 9) that regulated the slope of the HPF on the frequency axis (Figure 5F,G). We found fR to decrease with an increase in τ (Figure 5E) or a reduction in ε (Figure 5G). In summary, we mimicked the biophysical mechanisms governing resonance in neural structures to develop a phenomenological methodology to introduce and tune intrinsic resonance, yielding a tunable band-pass filtering structure in rate-based model neurons. In this model, as resonance was introduced through an HPF, a formulation that does not follow the mechanistic basis of resonance in physiological systems, we refer to this as a phenomenological model for resonating neurons.
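Under the same assumed band-pass magnitude (2πf)^ε / √(1 + (2πfτ)²) used as a stand-in for Equation 9, both tuning directions reported above (fR falling with τ, rising with ε) follow from a closed form obtained by setting the derivative of the log-magnitude to zero. This is an illustrative derivation, not the paper's exact formulation:

```python
import math

def f_R(tau, eps):
    """Resonance frequency (Hz) of |H(f)| = (2*pi*f)**eps / sqrt(1 + (2*pi*f*tau)**2).
    Setting d/df log|H| = eps/f - (2*pi*tau)**2 * f / (1 + (2*pi*f*tau)**2) = 0
    gives (2*pi*f_R*tau)**2 = eps / (1 - eps). Valid for 0 < eps < 1."""
    return math.sqrt(eps / (1.0 - eps)) / (2.0 * math.pi * tau)
```

For τ = 10 ms and ε = 0.3 this gives fR ≈ 10.4 Hz, i.e., theta-range resonance, and the formula makes the monotonic dependence on τ and ε explicit.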

Homogeneous CAN models constructed with phenomenological resonator neurons exhibited stable grid-patterned neural activity

How does the expression of intrinsic resonance in individual neurons of CAN models alter their grid-patterned neural activity? How do grid patterns respond to changes in resonance frequency, realized either by altering τ or ε? To address these, we replaced all neurons in the homogeneous CAN model with theta-frequency resonators and presented the network with the same virtual trajectory (Figure 2E). We found that CAN models with resonators were able to reliably and robustly produce grid-patterned neural activity that was qualitatively (Figure 6A,B) and quantitatively (Figure 6C–F, Figure 6—figure supplement 1) similar to patterns produced by networks of integrator neurons across different values of τ. Importantly, an increase in τ markedly increased spacing between grid fields (Figure 6D) and their average size (Figure 6E), consequently reducing the number of grid fields within the arena (Figure 6F). It should be noted that the reduction in grid score with increased τ (Figure 6C) is merely a reflection of the reduction in the number of grid fields within the arena, and not indicative of loss of grid-patterned activity. Although the average firing rates were tuned to be similar across the resonator and the integrator networks (Figure 6—figure supplement 1A), the peak-firing rate in the resonator network was significantly higher (Figure 6—figure supplement 1B). In addition, for both resonator and integrator networks, consistent with increases in grid-field size and spacing, there were marked increases in information rate (Figure 6—figure supplement 1C) and reductions in sparsity (Figure 6—figure supplement 1D) with increase in τ.

Figure 6 with 1 supplement
Impact of neuronal resonance, introduced by altering low-pass filter characteristics, on grid-cell activity in a homogeneous CAN model.

(A) Example rate maps of grid-cell activity from a homogeneous CAN model with integrator neurons modeled with different values for integration time constants (τ). (B) Example rate maps of grid-cell activity from a homogeneous CAN model with resonator neurons modeled with different τ values. (C–F) Grid score (C), average spacing (D), mean size (E), and number (F) of grid fields in the arena for all neurons (n = 3600) in homogeneous CAN models with integrator (blue) or resonator (red) neurons, modeled with different τ values. The HPF exponent ε was set to 0.3 for all resonator neuronal models.

Within our model framework, it is important to emphasize that although increasing τ reduced fR in resonator neurons (Figures 5E and 6B), the changes in grid spacing and size are not a result of change in fR, but a consequence of altered τ. This inference follows from the observation that altering τ has qualitatively and quantitatively similar outcomes on grid spacing and size in both integrator and resonator networks (Figure 6). Further confirmation for the absence of a direct role for fR in regulating grid spacing or size (within our modeling framework) came from the invariance of grid spacing and size to change in ε, which altered fR (Figure 7). Specifically, when we fixed τ and altered ε across resonator neurons in the homogeneous CAN model, fR of individual neurons changed (Figure 7A), but there were no marked changes in grid-field patterns (Figure 7A) or in the grid score, average grid spacing, mean size, number, or sparsity of grid fields (Figure 7B). However, increasing ε decreased the average and the peak firing rates, consequently reducing the information rate in the activity pattern (Figure 7B).

Impact of neuronal resonance, introduced by altering high-pass filter characteristics, on grid-cell activity in a homogeneous CAN model.

(A) Example rate maps of grid-cell activity from a homogeneous CAN model with integrator neurons (Column 1) or resonator neurons (Columns 2–6) modeled with different values of the HPF exponent (ε). (B) Comparing 8 metrics of grid-cell activity for all the neurons (n = 3600) in CAN models with integrator (blue) or resonator (green) neurons. CAN models with resonator neurons were simulated for different ε values. τ = 10 ms for all networks depicted in this figure.

Together, these results demonstrated that homogeneous CAN models with phenomenological resonators reliably and robustly produce grid-patterned neural activity. In these models, the LPF regulated the size and spacing of the grid fields and the HPF governed the magnitude of activity and the suppression of low-frequency inputs.

Phenomenological resonators stabilized the emergence of grid-patterned activity in heterogeneous CAN models

Having incorporated intrinsic resonance into the CAN model, we were now equipped to directly test our hypothesis that the expression of intrinsic neuronal resonance could stabilize grid-like activity patterns in heterogeneous networks. We incorporated heterogeneities, retaining the same forms/degrees of heterogeneities (Table 1; Figure 2A–D) and the same virtual trajectory (Figure 2E), in a CAN model endowed with resonators to obtain the spatial activity maps for individual neurons (Figure 8A). Strikingly, we observed that the presence of resonating neurons in the CAN model stabilized grid-patterned activity in individual neurons, despite the introduction of the highest degrees of all forms of biological heterogeneities (Figure 8A). Importantly, outputs produced by the homogeneous CAN model manifested spatially precise circular grid fields, exhibiting regular triangular firing patterns across trials (e.g., Figure 1A), which constituted forms of precision that are not observed in electrophysiologically obtained grid-field patterns (Fyhn et al., 2004; Hafting et al., 2005). However, with the incorporation of biological heterogeneities in resonator neuron networks, the imprecise shapes yet stable patterns of grid-field firing (e.g., Figure 8A) tend closer to those of biologically observed grid cells. Quantitatively, we computed grid-cell measurements across all the 3600 neurons in the network and found that all quantitative measurements (Figure 8B–I, Figure 8—figure supplement 1), including the grid score (Figure 8B) and information rate (Figure 8H), were remarkably robust to the introduction of heterogeneities (cf. Figures 2–3 for CAN models with integrator neurons).

Figure 8 with 2 supplements
Neuronal resonance stabilizes grid-cell activity in heterogeneous CAN models.

(A) Example rate maps of grid-cell activity in homogeneous (Top left) and heterogeneous CAN models, endowed with resonating neurons, across different degrees of heterogeneities. (B–I) Percentage changes in grid score (B), average spacing (C), peak firing rate (D), average firing rate (E), number (F), mean size (G), information rate (H), and sparsity (I) of grid field for all neurons (n = 3600) in the heterogeneous CAN model, plotted for 5 degrees of heterogeneities (D1–D5), compared with respective neurons in the homogeneous resonator network. All three forms of heterogeneities were incorporated together into the network. ε = 0.3 and τ = 10 ms for all networks depicted in this figure.

To ensure that the robust emergence of stable grid-patterned activity in heterogeneous CAN models with resonator neurons was not an artifact of the specific network under consideration, we repeated our analyses with two additional sets of integrator-resonator comparisons (Figure 8—figure supplement 2). In these networks, we employed a baseline τ of either 14 ms (Figure 8—figure supplement 2A) or 8 ms (Figure 8—figure supplement 2B) instead of the 10 ms value employed in the default network (Figures 2 and 8). A pairwise comparison of all grid-cell measurements between networks with integrator vs. resonator neurons demonstrated that the resonator networks were resilient to the introduction of heterogeneities compared to integrator networks (Figure 8—figure supplement 2).

Together, these results demonstrated that phenomenologically incorporating intrinsic resonance into neurons of the CAN model imparts resilience to the network, facilitating the robust emergence of stable grid-cell activity patterns, despite the expression of biological heterogeneities in neuronal and synaptic properties.

Mechanistic model of neuronal intrinsic resonance: incorporating a slow activity-dependent negative feedback loop

In the previous sections, we had phenomenologically introduced intrinsic resonance in rate-based neurons, by incorporating an artificial high-pass filter to the low-pass filtering property of an integrator neuron. Physiologically, however, intrinsic resonance results from the presence of resonating conductances in the neural system, which introduce a slow activity-dependent negative feedback into the neuron. To elaborate, there are two requirements for any conductance to act as a resonating conductance (Hutcheon and Yarom, 2000). First, the current mediated by the conductance should actively suppress any changes in the membrane voltage of neurons. In resonating conductances, this is implemented by the voltage-dependent gating properties of the ion channel mediating the resonating conductance, whereby any change in membrane voltage (de)activates the channel in such a way that the channel current suppresses the change in membrane voltage. This constitutes an activity-dependent negative feedback loop. The second requirement for a resonating conductance is that it has to be slower than the membrane time constant. This requirement ensures that the cutoff frequency of the low-pass filter (associated with the membrane time constant) is greater than that of the high-pass filter (mediated by the resonating conductance). Together, these two requirements mechanistically translate to a slow activity-dependent negative feedback loop, a standard network motif in biological systems for yielding damped oscillations and resonance (Alon, 2019). In intrinsically resonating neurons, resonance is achieved because the resonating conductance actively suppresses low-frequency signals through this slow negative feedback loop, in conjunction with the suppression of higher frequency signals by the low-pass filter mediated by the membrane time constant.
Resonating conductances do not suppress high frequencies because they are slow and would not activate/deactivate with faster high-frequency signals (Hutcheon and Yarom, 2000; Narayanan and Johnston, 2008).

In the single-neuron integrator model employed in this study, the LPF is implemented by the integration time constant τ. Inspired by the mechanisms behind the physiological emergence of neuronal intrinsic resonance, we mechanistically incorporated intrinsic resonance into this integrator neuronal model by introducing a slow negative feedback to the single neuron dynamics (Figure 9A). The dynamics of the coupled evolution of neuronal activity (S) and the feedback state variable (m) are provided in Equations (11–13). A sigmoidal feedback kernel (m) regulated the activity dependence, and the feedback time constant (τm) determined the kinetics of the negative feedback loop (Equations 11 and 12). The response of this single-neuronal model, endowed with the slow-negative feedback loop, to a pulse input manifested typical sag associated with the expression of intrinsic resonance (Hutcheon and Yarom, 2000), owing to the slow evolution of the feedback state variable m (Figure 9B,C). To directly test whether the introduction of the slow negative feedback loop yielded a resonating neuronal model, we assessed single-neuron output to a chirp stimulus (Figure 9D) and found the manifestation of intrinsic resonance when the feedback loop was activated (Figure 9E). As this resonator neuron model involving a slow negative feedback loop was derived from the mechanistic origins of intrinsic neuronal resonance, we refer to this as the mechanistic resonator model.
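The coupled dynamics can be sketched as follows. Since Equations (11–13) are not reproduced in this excerpt, the form below is a plausible stand-in, and all parameter values are illustrative: the activity S relaxes toward its input minus a feedback term g·m, while m tracks a sigmoidal function of S with the slow time constant τm. A step input then produces the characteristic sag, i.e., an early overshoot of S followed by relaxation as m slowly engages.

```python
import math

def simulate_step(I=1.0, tau=0.01, tau_m=0.075, g=1.0, k=10.0, s_half=0.3,
                  dt=1e-4, t_end=1.0):
    """Forward-Euler simulation of a rate neuron with a slow negative
    feedback loop (a plausible stand-in for Equations 11-13; parameters
    are illustrative):
        tau   * dS/dt = -S + I - g*m
        tau_m * dm/dt = m_inf(S) - m,  m_inf(S) = 1/(1 + exp(-k*(S - s_half)))
    Returns the time course of S for a step input I applied at t = 0."""
    def m_inf(s):
        return 1.0 / (1.0 + math.exp(-k * (s - s_half)))
    s, m = 0.0, m_inf(0.0)              # start from rest
    trace = []
    for _ in range(int(t_end / dt)):
        ds = (-s + I - g * m) / tau
        dm = (m_inf(s) - m) / tau_m
        s, m = s + dt * ds, m + dt * dm
        trace.append(s)
    return trace

trace = simulate_step()
early_peak = max(trace[:500])           # first 50 ms: S overshoots ...
final = trace[-1]                       # ... before m slowly pulls it down (the sag)
```

Because S (τ = 10 ms) settles much faster than m (τm = 75 ms), S first approaches its feedback-free target and is only later suppressed as m accumulates, reproducing the sag described above.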

Incorporation of a slow negative feedback loop into single-neuron dynamics introduces tunable resonance in rate-based neuronal models.

(A) A mechanistic model of intrinsic resonance in individual neurons using a slow negative feedback loop. (B) Temporal evolution of the output (S) of an individual neuron and the state variable related to the negative feedback (m) in response to a square pulse. (C) Phase-plane representation of the dynamics depicted in (B). (D) Responses of neurons with low-pass (integrator; blue) and band-pass (resonator; red) filtering structures to a chirp stimulus (top). The resonator was implemented through the introduction of a slow negative feedback loop (A). Equations (10–12) were employed for computing these responses. (E) Response magnitude of an integrator neuron (low-pass filter, blue) and resonator neuron (band-pass filter, red) as functions of input frequency, derived from their respective responses to the chirp stimulus. fR represents resonance frequency. The response magnitudes in (E) were derived from respective color-coded traces shown in (D). (F–I) Tuning resonance frequency by altering the parameters of the slow negative feedback loop. Resonance frequency, obtained from an individual resonator neuron responding to a chirp stimulus, is plotted as functions of the half-maximal activity of the feedback kernel, S1/2 (F), the slope of the feedback kernel, k (G), the strength of negative feedback, g (H), and the feedback time constant, τm (I). The insets in (F) and (G) depict the impact of altering S1/2 and k on the feedback kernel (m), respectively.

Resonance frequency in the mechanistic resonator model was tunable by altering the parameters associated with the feedback loop. Specifically, increasing the half-maximal activity of the feedback kernel (S1/2) reduced resonance frequency within the tested range of values (Figure 9F), whereas increasing the slope of the feedback kernel (k) enhanced fR initially, but reduced fR with further increases in k (Figure 9G). Furthermore, enhancing the strength of the feedback (g) increased fR (Figure 9H), which is reminiscent of the increase in fR with increase in resonating conductance density (Narayanan and Johnston, 2008), the parameter that defines the strength of the negative feedback loop in intrinsically resonating neurons. An important requirement for the expression of resonance through the mechanistic route that we chose is that the feedback time constant (τm) has to be slower than the integration time constant (τ = 10 ms). To test this in our model, we altered τm of the feedback loop while keeping all other values constant (Figure 9I) and found that resonance did not manifest with low values of τm. With increase in τm, there was an initial rapid increase in fR, which then reduced upon further increase in τm (Figure 9I). It may be noted that low values of τm translate to a fast negative feedback loop, thus emphasizing the need for a slow negative feedback loop for the expression of resonance. These observations are analogous to the dependence of resonance frequency in intrinsically resonating neurons on the activation time constant of the resonating conductance (Narayanan and Johnston, 2008). Together, we have proposed a tunable mechanistic model for intrinsic resonance in rate-based neurons through the incorporation of a slow negative feedback loop into the neuronal dynamics.
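The requirement that τm be slow can be examined on a linearization of such a feedback model around its operating point. This is an illustrative analysis, not the paper's derivation; `loop_gain` stands for g multiplied by the slope of the feedback kernel at the fixed point, and all values are assumptions. Sweeping the drive frequency shows a band-pass peak for a slow feedback loop (τm = 75 ms) but a monotonically decaying, integrator-like profile when the feedback is fast (τm = 1 ms):

```python
import math

def gain(f, tau=0.01, tau_m=0.075, loop_gain=2.3):
    """|H(j*2*pi*f)| of the linearized feedback model (illustrative):
        tau   * ds/dt = -s + i(t) - g*m
        tau_m * dm/dt = b*s - m          (b: slope of the feedback kernel)
    so H(jw) = (1/tau)(jw + 1/tau_m) / ((jw + 1/tau)(jw + 1/tau_m) + loop_gain/(tau*tau_m)),
    with loop_gain = g*b."""
    w = 2 * math.pi * f
    a, a_m = 1.0 / tau, 1.0 / tau_m
    return abs(a * (1j * w + a_m) / ((1j * w + a) * (1j * w + a_m) + loop_gain * a * a_m))

freqs = [0.5 * i for i in range(1, 101)]                    # 0.5 .. 50 Hz
f_slow = max(freqs, key=lambda f: gain(f, tau_m=0.075))     # slow feedback: resonant peak
f_fast = max(freqs, key=lambda f: gain(f, tau_m=0.001))     # fast feedback: no resonance
```

With these assumed values, the slow loop peaks in the theta range while the fast loop's response is largest at the lowest tested frequency, mirroring the observation that resonance did not manifest for low τm.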

Mechanistic resonators stabilized the emergence of grid-patterned activity in heterogeneous CAN models

How was grid-patterned neural activity in a homogeneous CAN model affected by the expression of intrinsic neuronal resonance through a mechanistic framework? To address this, we replaced all the integrator neurons in the homogeneous CAN model (Figure 2E; homogeneous network) with mechanistic theta-frequency resonator neurons, while maintaining identical connectivity patterns and velocity-dependent inputs. We observed that neurons within this homogeneous CAN model built with mechanistic resonators were able to produce distinct grid-patterned neural activity (Figure 10A). Sensitivity analyses showed that grid-field size and spacing were affected by the parameters associated with the slow negative feedback loop, consequently affecting the grid score (Figure 10B) and other measurements associated with the grid-patterned activity (Figure 10—figure supplements 1–2). Within the tested ranges, the strength of feedback (g) had limited effect on grid-patterned neural activity, whereas the slope (k) and half-maximal activity (S1/2) of the feedback kernel had strong impact on grid-field size and spacing (Figure 10). Although the grid fields were well-defined across all values of k and S1/2, the grid scores were lower for some cases (e.g., k = 0.5; S1/2 = 0.1 in Figure 10B) owing to the smaller number of grid fields within the arena (Figure 10A). Importantly, the feedback time constant (τm) had very little impact on grid-patterned neural activity (Figure 10A) and on all quantified measurements (Figure 10B, Figure 10—figure supplements 1–2). Together, these analyses demonstrated that a homogeneous CAN model built with mechanistic resonators yielded stable and robust grid-patterned neural activity whose grid-field size and spacing could be tuned by the parameters of the slow negative feedback loop that governed resonance.

Figure 10 with 2 supplements
Impact of neuronal resonance, introduced by a slow negative feedback loop, on grid-cell activity in a homogeneous CAN model.

(A) Example rate maps of grid-cell activity from a homogeneous CAN model for different values of the feedback strength (g), slope of the feedback kernel (k), feedback time constant (τm), and half-maximal activity of the feedback kernel, S1/2. (B) Grid scores for all the neurons in the homogeneous CAN model for different values of g, k, τm, and S1/2 in resonator neurons (green). Grid scores for homogeneous CAN models with integrator neurons (without the negative feedback loop) are also shown (blue). Note that although grid-patterned neural activity is observed across all networks, the grid score is lower in some cases because of the large size and lower numbers of grid fields within the arena with those parametric configurations.

Next, we systematically incorporated the different degrees of heterogeneities into the CAN model with mechanistic resonator neurons to test if resonance could stabilize grid-patterned activity in the heterogeneous CAN models. We employed 5 degrees of all heterogeneities with the same virtual trajectory (Figure 2E) in a CAN model endowed with mechanistic resonators to compute the spatial activity maps for individual neurons (Figure 11A). As with the network built with phenomenological resonators (Figure 8), we observed stable and robust grid-like spatial maps emerge even with the highest degree of heterogeneities in the CAN model with mechanistic resonator neurons (Figure 11). Furthermore, compared to the homogeneous model, the grid-patterned activity produced by the heterogeneous CAN model more closely mimicked grid-cell activity observed in neurons of the MEC, endowed with imprecise grid-field shapes (Figure 11). Quantitatively, the grid score (Figure 11B), average spacing (Figure 11C), and mean grid-field size (Figure 11G) were remarkably robust to the introduction of heterogeneities. However, the average (Figure 11E) and peak (Figure 11D) firing rates reduced with increasing degree of heterogeneities, consequently affecting measurements that were dependent on firing rate (i.e., information rate and sparsity; Figure 11H–I). Together, these results demonstrate that intrinsic neuronal resonance, introduced either through phenomenological (Figure 8) or through mechanistic (Figure 11) single-neuron models, yielded stable and robust grid-patterned activity in heterogeneous CAN models.

Figure 11 with 1 supplement
Resonating neurons, achieved through a slow negative feedback loop, stabilize grid-cell activity in heterogeneous CAN models.

(A) Example rate maps of grid-cell activity in homogeneous (top left) and heterogeneous CAN models, endowed with resonating neurons, across different degrees of heterogeneities. (B–I) Percentage changes in grid score (B), average spacing (C), peak firing rate (D), average firing rate (E), number (F), mean size (G), information rate (H), and sparsity (I) of grid field for all neurons (n = 3600) in the heterogeneous CAN model, plotted for 5 degrees of heterogeneities (D1–D5). The percentage changes are computed with reference to respective neurons in the homogeneous resonator network. All three forms of heterogeneities were incorporated together into these networks.

The grid-cell activity on a 2D plane represents a spatially repetitive periodic pattern. Consequently, a phase plane constructed from activity along the diagonal of the 2D plane on one axis (Figure 11—figure supplement 1A), and the spatial derivative of this activity on the other, should provide a visual representation of the impact of heterogeneities and resonance on this periodic pattern of activity. To visualize the specific changes in the periodic orbits obtained with homogeneous and heterogeneous networks constructed with integrators or resonators, we plotted the diagonal activity profiles of five randomly picked neurons using the phase-plane representation (Figure 11—figure supplement 1B). This visual representation confirmed that homogeneous CAN models built of integrators or resonators yielded stable closed-orbit trajectories, representing similarly robust grid-patterned periodic activity (Figure 11—figure supplement 1B). The disparate sizes of different orbits are merely a reflection of where the diagonal intersects a given grid field. Specifically, if the grid fields were considered to be ideal circles, the orbital size is the largest if the diagonal passes through the diameter, with orbital size reducing for any other chord. Upon introduction of heterogeneities, the phase-plane plots of diagonal activity profiles from the heterogeneous integrator network lacked closed orbits (Figure 11—figure supplement 1B). This is consistent with the drastic reductions in grid score and information rate for these heterogeneous network models (Figures 2–3). In striking contrast, the phase-plane plots from the heterogeneous resonator network manifested closed orbits even with the highest degree of heterogeneities introduced, irrespective of whether resonance was introduced through a phenomenological or a mechanistic model (Figure 11—figure supplement 1B).
Although these phase-plane trajectories were noisy compared to those from the homogeneous resonator network, the presence of closed orbits indicated the manifestation of spatial periodicity in these activity patterns (Figure 11—figure supplement 1B). These observations visually demonstrated that resonator neurons stabilized the heterogeneous network and maintained spatial periodicity in grid-cell activity, irrespective of whether resonance was introduced through a phenomenological or a mechanistic model.
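The phase-plane construction itself is simple to sketch. The code below is illustrative, using a synthetic one-dimensional "diagonal" profile rather than actual network output: plot the activity against its spatial derivative, and test orbit closure by asking whether the trajectory returns to its starting point after one spatial period.

```python
import math

def phase_plane(signal, ds):
    """(value, spatial derivative) pairs: the phase-plane representation of
    activity along the arena diagonal (central differences)."""
    deriv = [(signal[i + 1] - signal[i - 1]) / (2 * ds)
             for i in range(1, len(signal) - 1)]
    return list(zip(signal[1:-1], deriv))

ds = 0.01
x = [i * ds for i in range(1000)]
periodic = [math.sin(2 * math.pi * xi) ** 2 for xi in x]     # grid-like bumps, period 0.5
drifting = [p + 0.3 * xi for p, xi in zip(periodic, x)]      # periodicity broken by slow drift

orbit_p = phase_plane(periodic, ds)
orbit_d = phase_plane(drifting, ds)
period = 50                                                  # samples per spatial period

def closure(orbit):
    """Distance between a point and the point one period later; ~0 for a closed orbit."""
    (v0, d0), (v1, d1) = orbit[0], orbit[period]
    return math.hypot(v1 - v0, d1 - d0)
```

The periodic profile returns to its starting point after one period (a closed orbit), whereas the drifting profile does not, which is the same diagnostic applied above to homogeneous versus heterogeneous networks.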

The slow kinetics of the negative feedback loop in mechanistic resonators is a critical requirement for stabilizing heterogeneous CAN models

Our rationale behind the introduction of intrinsic resonance into the neuron was that it would suppress the low-frequency perturbations introduced by biological heterogeneities (Figure 4). Consequently, our hypothesis is that the targeted suppression of low-frequency components resulted in the stabilization of the CAN model. Two lines of evidence for this hypothesis were that the suppression of low-frequency components through phenomenological (Figure 8) or through mechanistic (Figure 11) resonators resulted in stabilization of the heterogeneous CAN models. An advantage of recruiting a mechanistic model for introducing resonance is that sensitivity analyses on its parameters could provide valuable mechanistic insights about how such stabilization is achieved. With reference to this specific hypothesis, the ability to tune resonance by altering the feedback time constant τm (Figure 9I), without altering the feedback kernel or the feedback strength, provides an efficient route to understanding the mechanistic origins of the stabilization. Specifically, the value of τm (default value = 75 ms) governs the slow kinetics of the feedback loop and is the source of the targeted suppression of low-frequency components. Reducing τm would imply a faster negative feedback loop, thereby suppressing even higher-frequency components. As a further test of our hypothesis on the role of suppressing low-frequency components in stabilizing the CAN network, we asked if mechanistic resonators with lower values of τm would be able to stabilize heterogeneous CAN models (Figure 12). If fast negative feedback loops (i.e., low values of τm) were sufficient to stabilize heterogeneous CAN models, that would counter our hypothesis on the requirement of targeted suppression of low-frequency components. To the contrary, we found that heterogeneous CAN networks with neurons endowed with fast negative feedback loops were incapable of stabilizing the grid-cell network (Figure 12).
With progressive increase in τm, the grid-patterned firing stabilized even for high degrees of heterogeneities (Figure 12B,C for low values of τm; Figure 11B for τm = 75 ms), thus providing an additional direct line of evidence supporting our hypothesis on the need for targeted suppression of low-frequency components in stabilizing the network. We noted that the impact of altering τm was specifically related to stabilizing heterogeneous networks by suppressing heterogeneities-driven low-frequency perturbations, because altering τm did not alter grid-patterned activity in homogeneous resonator networks (Figure 10A,B). Together, our results provide multiple lines of evidence that the slow kinetics of the negative feedback loop in single-neuron dynamics (Figure 9A) mediates targeted suppression of low-frequency signals (Figure 9D,E), thereby yielding intrinsic neuronal resonance (Figure 9I) and stabilizing grid-patterned activity in heterogeneous CAN models (Figures 11 and 12).

The slow kinetics of the negative feedback loop is a critical requirement for stabilizing heterogeneous CAN models.

(A) A mechanistic model of intrinsic resonance in individual neurons using a slow negative feedback loop, with the feedback time constant (τm) defining the slow kinetics. (B) Example rate maps of grid-cell activity in homogeneous (top left) and heterogeneous CAN models, endowed with neurons built with different values of τm, across different degrees of heterogeneities. (C) Percentage changes in grid score for all neurons (n = 3600) in the heterogeneous CAN model, endowed with neurons built with different values of τm, plotted for 5 degrees of heterogeneities (D1–D5). The percentage changes are computed with reference to respective neurons in the homogeneous resonator network. All three forms of heterogeneities were incorporated together into these networks.

Resonating neurons suppressed the impact of biological heterogeneities on low-frequency components of neural activity

As detailed above, our hypothesis on a potential role for intrinsic neuronal resonance in stabilizing grid-patterned firing in heterogeneous CAN models was centered on the ability of resonators to specifically suppress low-frequency components. Although our analyses provided evidence for stabilization of grid-patterned firing with phenomenological (Figure 8) or mechanistic (Figure 11) resonators, did resonance specifically suppress the impact of biological heterogeneities on low-frequency components of neural activity? To assess this, we performed frequency-dependent analyses of neural activity in CAN models with phenomenological or mechanistic resonators (Figure 13, Figure 13—figure supplements 1–3) and compared them with integrator-based CAN models (Figure 4). First, we found that the variance in the deviations of neural activity in heterogeneous networks with reference to the respective homogeneous models was considerably lower with resonator neurons (both phenomenological and mechanistic models), compared to their integrator counterparts (Figure 13A,B, Figure 13D,E, Figure 13—figure supplements 1–3). Second, comparing the relative power of neural activity across different octaves, we found that networks with resonators suppressed lower frequencies (predominantly the 0–2 Hz band) and enhanced power in the range of neuronal resonance frequencies, when compared with their integrator counterparts (Figure 13C, Figure 13F, Figure 13—figure supplements 1–3). This relative suppression of low-frequency power, accompanied by the relative enhancement of high-frequency power, was observed across all networks with resonators, either homogeneous (Figure 13—figure supplements 1–2) or heterogeneous with distinct forms of heterogeneities (Figure 13—figure supplement 3G–I).
Importantly, given the slow activity-dependent negative feedback loop involved in mechanistic resonators, the low-frequency suppression was found to be extremely effective across all degrees of heterogeneities (Figure 13D,E) with minimal increases of power in high-frequency bands (Figure 13F) compared to their phenomenological counterparts (Figure 13A–C). The phenomenological resonators were built with a simple HPF that is not endowed with activity-dependent filtering capabilities. In addition, the derivative-based implementation of the phenomenological resonator model yielded spurious high-frequency power, which was averted with the slow activity-dependent negative feedback loop incorporated into the mechanistic resonator model (Figure 13D–F).
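The band-wise comparison above can be illustrated with the filters' frequency responses. This is a sketch with assumed filter forms (the same fractional HPF assumption used earlier, not the network analysis itself): the fraction of total response power that an integrator passes at or below 2 Hz is several-fold larger than what a band-pass resonator passes in the same band.

```python
import math

def lpf(f, tau=0.01):
    """Integrator (low-pass) magnitude response."""
    return 1.0 / math.sqrt(1.0 + (2 * math.pi * f * tau) ** 2)

def bpf(f, tau=0.01, eps=0.3):
    """Resonator (band-pass) magnitude: assumed fractional HPF times the LPF."""
    return (2 * math.pi * f) ** eps * lpf(f, tau)

freqs = [0.1 * i for i in range(1, 501)]                 # 0.1 .. 50 Hz

def low_freq_fraction(h, cutoff=2.0):
    """Fraction of total response power that lies at or below `cutoff` Hz."""
    total = sum(h(f) ** 2 for f in freqs)
    low = sum(h(f) ** 2 for f in freqs if f <= cutoff)
    return low / total
```

Under these assumptions the integrator concentrates roughly an order of magnitude more of its response power in the 0–2 Hz band than the resonator does, consistent with the targeted low-frequency suppression reported above.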

Figure 13 with 3 supplements
Intrinsically resonating neurons suppressed heterogeneity-induced variability in low-frequency perturbations caused by the incorporation of biological heterogeneities.

(A) Normalized variance of the differences between the magnitude spectra of temporal activity in neurons of homogeneous vs. heterogeneous networks, across different degrees of all three forms of heterogeneities expressed together, plotted as a function of frequency. (B) Area under the curve (AUC) of the normalized variance plots shown in Figure 4H (for the integrator network) and (A) (for the phenomenological resonator network), showing the variance to be lower in resonator networks compared to integrator networks. The inset shows the total AUC across all frequencies for the integrator vs. the resonator networks as a function of the degree of heterogeneities. (C) Difference between the normalized magnitude spectra of neural temporal activity patterns for integrator and resonator neurons in CAN models. Solid lines depict the mean and shaded areas depict the standard deviations, across all 3600 neurons. The resonator networks in (A–C) were built with phenomenological resonators. (D–F) Same as (A–C), but for the mechanistic model of intrinsic resonance. All heterogeneities were simultaneously expressed for all the analyses presented in this figure.

Together, our results demonstrated that biological heterogeneities predominantly altered low-frequency components of neural activity, and provided strong quantitative lines of evidence that intrinsic neuronal resonance plays a stabilizing role in heterogeneous networks by targeted suppression of low-frequency inputs, thereby counteracting the disruptive impact of biological heterogeneities on low-frequency components.

Discussion

The principal conclusion of this study is that intrinsic neuronal resonance stabilizes heterogeneous 2D CAN models, by suppressing heterogeneity-induced perturbations in low-frequency components of neural activity. Our analyses provided the following lines of evidence in support of this conclusion:

  1. Neural-circuit heterogeneities destabilized grid-patterned activity generation in a 2D CAN model (Figures 2 and 3).

  2. Neural-circuit heterogeneities predominantly introduced perturbations in the low-frequency components of neural activity (Figure 4).

  3. Targeted suppression of low-frequency components through phenomenological (Figure 5C) or through mechanistic (Figure 9D) resonators resulted in stabilization of the heterogeneous CAN models (Figures 8 and 11). Thus, stabilization was achieved irrespective of the means employed to suppress low-frequency components: an activity-independent suppression of low frequencies (Figure 5) or an activity-dependent slow negative feedback loop (Figure 9).

  4. Changing the feedback time constant τm in mechanistic resonators, without changes to neural gain or feedback strength, allowed us to control the specific range of frequencies that would be suppressed. Our analyses showed that a slow negative feedback loop, which results in targeted suppression of low-frequency components, was essential in stabilizing grid-patterned activity (Figure 12). As the slow negative feedback loop and the resultant suppression of low frequencies mediate intrinsic neuronal resonance, these analyses provide important lines of evidence for the role of targeted suppression of low frequencies in stabilizing grid-patterned activity.

  5. We demonstrate that the incorporation of phenomenological (Figure 13A–C) or mechanistic (Figure 13D–F) resonators specifically suppressed lower frequencies of activity in the 2D CAN model.

A physiological role for intrinsic neuronal resonance in stabilizing heterogeneous neural networks

Intrinsic neuronal resonance is effectuated by the expression of resonating conductances, which are mediated by restorative channels endowed with activation time constants slower than the neuronal membrane time constant (Hutcheon and Yarom, 2000). The gating properties and the kinetics of these ion channels allow them to suppress low-frequency activity with little to no impact on higher frequencies, yielding resonance in conjunction with the LPF governed by the membrane time constant (Hutcheon and Yarom, 2000). This intrinsic form of neuronal resonance, mediated by ion channels that are intrinsic to the neural structure, is dependent on membrane voltage (Narayanan and Johnston, 2007; Narayanan and Johnston, 2008; Hu et al., 2009), somato-dendritic recording location (Narayanan and Johnston, 2007; Narayanan and Johnston, 2008; Hu et al., 2009), neuronal location along the dorso-ventral axis (Giocomo et al., 2007; Garden et al., 2008) and is regulated by activity-dependent plasticity (Brager and Johnston, 2007; Narayanan and Johnston, 2007). 
Resonating conductances have been implicated in providing frequency selectivity to specific ranges of frequencies under sub- and supra-threshold conditions (Hutcheon and Yarom, 2000; Narayanan and Johnston, 2007; Hu et al., 2009; Das and Narayanan, 2017; Das et al., 2017), in mediating membrane potential oscillations (Fransén et al., 2004; Mittal and Narayanan, 2018), in mediating coincidence detection through alterations to the class of neural excitability (Das and Narayanan, 2017; Das et al., 2017), in regulating spatio-temporal summation of synaptic inputs (Garden et al., 2008), in introducing phase leads in specific ranges of frequencies that regulate temporal relationships of neural responses to oscillatory inputs (Narayanan and Johnston, 2008), in regulating local-field potentials and associated spike phases (Sinha and Narayanan, 2015; Ness et al., 2018), in mediating activity homeostasis through regulation of neural excitability (Brager and Johnston, 2007; Narayanan and Johnston, 2007; Honnuraiah and Narayanan, 2013), in altering synaptic plasticity profiles (Nolan et al., 2007; Narayanan and Johnston, 2010), and in regulating grid-cell scales (Giocomo et al., 2007; Giocomo et al., 2011b; Giocomo et al., 2011a; Pastoll et al., 2012).

Our analyses provide an important additional role for intrinsic resonance in stabilizing heterogeneous neural networks, through the suppression of heterogeneity-induced perturbations in low-frequency components. The 1/f characteristics associated with neural activity imply that power is higher at lower frequencies (Buzsaki, 2006), and our analyses show that the incorporation of biological heterogeneities into networks disrupts their functional outcomes by introducing perturbations predominantly at lower frequencies. We demonstrate that resonating conductances, through their ability to suppress low-frequency activity, are ideally placed to suppress such low-frequency perturbations, thereby stabilizing activity patterns in the face of biological heterogeneities. As biological heterogeneities are ubiquitous, we postulate intrinsic neuronal resonance as a powerful mechanism that lends stability across different heterogeneous networks, through suppression of the low-frequency perturbations introduced by the heterogeneities. A corollary to this postulate is that the specific resonance frequency of a given neural structure reflects the need to suppress frequencies where perturbations are introduced by the specific forms and degrees of biological heterogeneities expressed in the network where the structure resides.

Within this framework, the dorso-ventral gradient in resonance frequencies in entorhinal stellate cells could be a reflection of the impact of specific forms of heterogeneities expressed along the dorso-ventral axis on specific frequency ranges, with resonance frequency being higher in regions where a larger suppression of low-frequency components is essential. Future electrophysiological and computational studies could specifically test this hypothesis by quantifying the different heterogeneities in these sub-regions, assessing their frequency-dependent impact on neural activity in individual neurons and their networks. More broadly, future studies could directly measure the impact of perturbing resonating conductances on network stability and low-frequency activity in different brain regions to test our predictions on the relationship among biological heterogeneities, intrinsic resonance, and network stability.

Slow negative feedback: stability, noise suppression, and robustness

Our analyses demonstrated the efficacy of a slow negative feedback loop in stabilizing grid-patterned activity in CAN models (Figures 11–12). From a broader perspective, negative feedback is a standard motif for establishing stabilization and robustness of dynamical systems, spanning control engineering (Nyquist, 1932; Black, 1934; Bode, 1945; Bechhoefer, 2005; Åström and Murray, 2008) and biological networks (Barkai and Leibler, 1997; Weng et al., 1999; Hutcheon and Yarom, 2000; Bhalla et al., 2002; Milo et al., 2002; Shen-Orr et al., 2002; Tyson et al., 2003; Alon, 2007; Turrigiano, 2007; Novák and Tyson, 2008; Tyson and Novák, 2010). In biological systems, the impact of negative feedback on stability, robustness, and homeostasis spans multiple scales, from single-molecule dynamics to the effective functioning of entire ecosystems. Examples of the stabilizing roles of negative feedback at the organ level include the baroreflex in blood pressure regulation (Sved, 2009), the regulation of body temperature, blood glucose levels, and endocrine hormone secretion (Modell et al., 2015), and erythropoiesis (Koulnis et al., 2014). Cells are equipped with distinct receptors that can sense changes in temperature, pH, damage to cellular workhorse proteins or DNA, the internal state of cells, and the accumulation of products (Tyson and Novák, 2010). Functional motifs consisting of combinations of positive and negative feedback loops often impart stability to the biochemical reactions and signaling networks comprising these receptors, thus maintaining homeostasis (Alon, 2007). 
Specific examples for negative feedback in biochemical reactions and signaling networks include protein synthesis (Goodwin, 1966), mitotic oscillators (Goldbeter, 1991), MAPK signaling pathways (Kholodenko, 2000; Bhalla et al., 2002), cAMP production (Martiel and Goldbeter, 1987), adaptation in bacterial chemotaxis (Knox et al., 1986; Barkai and Leibler, 1997; Spiro et al., 1997; Yi et al., 2000; Levchenko and Iglesias, 2002), and M/G1 module (G1/S and G2/M phases) of the cell cycle control system (Rupes, 2002).

Importantly, with specific relevance to our hypothesis, negative feedback has been shown to reduce the effects of noise and enhance system stability with reference to internal and external perturbations because of the suppressive nature of this motif (Savageau, 1974; Becskei and Serrano, 2000; Thattai and van Oudenaarden, 2001; Austin et al., 2006; Dublanche et al., 2006; Raj and van Oudenaarden, 2008; Lestas et al., 2010; Cheong et al., 2011; Voliotis et al., 2014). In addition, negative feedback alleviates bottlenecks on information transfer (Cheong et al., 2011), and has been proposed as a mechanism for stable alignment of dose-response in the signaling system for mating pheromones in yeast, by adjusting the dose-response of downstream systems to match the dose-response of upper-level systems and simultaneously reducing the amplification of stochastic noise in the system (Yu et al., 2008).

At the cellular scale, the dynamics of the action potential, a fundamental unit of information transfer in the nervous system, are critically dependent on the negative feedback provided by delayed-rectifier potassium channels (Hodgkin and Huxley, 1952c; Hodgkin and Huxley, 1952b; Hodgkin and Huxley, 1952a). Resonating conductances (or phenomenological inductances) mediate a slow negative feedback in sustaining sub-threshold membrane potential oscillations, together with amplifying conductances that mediate a fast positive feedback loop (Cole and Baker, 1941; Mauro, 1961; Cole, 1968; Mauro et al., 1970; Hutcheon and Yarom, 2000; Narayanan and Johnston, 2008). A similar combination of positive and negative feedback loops involving synaptic connectivity has been suggested to modulate the neuronal oscillation frequency during sensory processing in a neuronal circuit model of layers 2 and 3 of the sensory cortex (Lee et al., 2018). Furthermore, positive and negative feedback signals play critical roles in the emergence of neuronal polarity (Takano et al., 2019). Activity-dependent negative feedback mechanisms control the density of ion channels and receptors based on the levels of neural activity, resulting in homeostatic activity regulation (Bienenstock et al., 1982; Turrigiano et al., 1994; Turrigiano et al., 1995; Turrigiano et al., 1998; Desai et al., 1999; Turrigiano, 2007; O'Leary et al., 2010; O'Leary and Wyllie, 2011; Honnuraiah and Narayanan, 2013; O'Leary et al., 2014; Srikanth and Narayanan, 2015). Microglia have been shown to stabilize neuronal activity through a negative feedback loop that depends on extracellular ATP concentration, which in turn depends on neural activity (Badimon et al., 2020). 
Models of neuronal networks have successfully utilized negative feedback loops to achieve the transfer of decorrelated olfactory information in the paleocortex (Ambros-Ingerson et al., 1990), to reduce redundancy in information transfer in the visual pathway (Pece, 1992), in a model of the LGN inhibited by V1 to achieve specificity (Murphy and Sillito, 1987), and in a model of the cerebellum based on feedback motifs (D'Angelo et al., 2016).

Finally, an important distinction between the phenomenological and the mechanistic resonators is that the former is activity independent, whereas the latter is activity dependent in terms of their ability to suppress low-frequency signals. This distinction explains the differences between how these two resonators act on homogeneous and heterogeneous CAN networks, especially in terms of suppressing low-frequency power (Figure 13, Figure 13—figure supplements 1–3). Together, the incorporation of resonance through a negative feedback loop allowed us to link our analyses to the well-established role of network motifs involving negative feedback loops in inducing stability and suppressing external/internal noise in engineering and biological systems. We envisage intrinsic neuronal resonance as a cellular-scale activity-dependent negative feedback mechanism, a specific instance of a well-established network motif that effectuates stability and suppresses perturbations across different networks.

Future directions and considerations in model interpretation

Our analyses here employed a rate-based CAN model for the generation of grid-patterned activity. Rate-based models are inherently limited in their ability to assess temporal relationships between spike timings, which are important from the temporal coding perspective where spike timings with reference to extracellular oscillations have been shown to carry spatial information within grid fields (Hafting et al., 2008). Additionally, the CAN model is one of the several theoretical and computational frameworks that have been proposed for the emergence of grid-cell activity patterns, and there are lines of experimental evidence that support aspects of these distinct models (Kropff and Treves, 2008; Burak and Fiete, 2009; Burgess and O'Keefe, 2011; Giocomo et al., 2011a; Navratilova et al., 2012; Couey et al., 2013; Domnisoru et al., 2013; Schmidt-Hieber and Häusser, 2013; Yoon et al., 2013; Schmidt-Hieber et al., 2017; Urdapilleta et al., 2017; Stella et al., 2020; Tukker et al., 2021). Within some of these frameworks, resonating conductances have been postulated to play specific roles, distinct from the one proposed in our study, in the emergence of grid-patterned activity and the regulation of their properties (Giocomo et al., 2007; Giocomo et al., 2011b; Giocomo et al., 2011a; Pastoll et al., 2012). Together, the use of the rate-based CAN model has limitations in terms of assessing temporal relationships between oscillations and spike timings, and in deciphering the other potential roles of resonating conductances in grid-patterned firing. However, models that employ other theoretical frameworks do not explicitly incorporate the several heterogeneities in afferent, intrinsic, and synaptic properties of biological networks, including those in the conductance and gating properties of resonating conductances. 
Future studies should therefore explore the role of resonating conductances in stabilizing conductance-based grid-cell networks that are endowed with all forms of biological heterogeneities. Such conductance-based analyses should also systematically assess the impact of resonating conductances, their kinetics, and gating properties (including associated heterogeneities) in regulating temporal relationships of spike timings with theta-frequency oscillations spanning the different theoretical frameworks.

A further direction for future studies could be the use of morphologically realistic conductance-based model neurons, which would enable the incorporation of the distinct ion channels and receptors distributed across the somato-dendritic arborization. Such models could assess the role of interactions among several somato-dendritic conductances, especially with resonating conductances, in regulating grid-patterned activity generation (Burgess and O'Keefe, 2011; Giocomo et al., 2011a). In addition, computations performed by such a morphologically realistic conductance-based neuron are more complex than the simplification of neural computation as an integrator or a resonator (Schmidt-Hieber et al., 2017). For instance, owing to the differential distribution of ionic conductances, different parts of the neuron could exhibit integrator- or resonator-like characteristics, with interactions among different compartments yielding the final outcome (Das and Narayanan, 2017; Das et al., 2017). The conclusions of our study emphasizing the importance of biological heterogeneities and resonating conductances in grid-cell models underline the need for heterogeneous, morphologically realistic conductance-based network models to systematically compare different theoretical frameworks for grid-cell emergence. Future studies should endeavor to build such complex heterogeneous networks, endowed with synaptic and channel noise, to systematically assess the role of heterogeneities and specific ion-channel conductances in the emergence of grid-patterned neural activity across different theoretical frameworks.

In summary, our analyses demonstrated that incorporation of different forms of biological heterogeneities disrupted network functions through perturbations that were predominantly in the lower frequency components. We showed that intrinsic neuronal resonance, a mechanism that suppressed low-frequency activity, stabilized network function. As biological heterogeneities are ubiquitous and as the dominance of low-frequency perturbations is pervasive across biological networks (Hausdorff and Peng, 1996; Gilden, 2001; Gisiger, 2001; Ward, 2001; Buzsaki, 2006), we postulate that mechanisms that suppress low-frequency components could be a generalized route to stabilize heterogeneous biological networks.

Materials and methods

Development of a virtual trajectory to reduce computational cost

Request a detailed protocol

We developed the following algorithm to yield faster virtual trajectories in either a circular (2 m diameter) or a square (2 m × 2 m) arena:

  1. The starting location of the virtual animal was set at the center of the arena (x0 = 1, y0 = 1).

  2. At each time step (= 1 ms), two random numbers were picked: a distance dt from a uniform distribution (dt ∈ [0, 0.004]) and an angle of movement At from another uniform distribution (At ∈ [−π/36, π/36]). The angle of movement was restricted to within π/36 on either side to yield smooth movements within the spatial arena. The new coordinates of the animal were then updated as:

    • (1) xt = xt−1 + dt sin(At)
    • (2) yt = yt−1 + dt cos(At)
    • If the new location (xt, yt) fell outside the arena, dt and At were repicked until (xt, yt) was inside the bounds of the arena.

  3. To enable sharp turns near the boundaries of the arena, At was picked from a uniform distribution spanning [0, 2π] instead of [−π/36, π/36] if either xt−1 or yt−1 was close to the arena boundaries. This enhanced range for At close to the boundaries ensured that there was enough deflection in the trajectory to mimic the sharp turns of animal runs at open-arena boundaries.

The limited range of At ∈ [−π/36, π/36] in step 2 ensured that the head-direction and velocity inputs to the neurons in the CAN model did not change drastically at every time step of the simulation run, thereby stabilizing spatial activity. We found that the virtual trajectories yielded by this algorithm closely mimicked animal runs in an open arena, with better control over simulation periods and better computational efficiency, covering the entire arena within a shorter duration.
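The steps above can be sketched in Python as follows (a minimal sketch; the wall-proximity threshold `margin` and the seeding interface are illustrative assumptions, not taken from the original model code):

```python
import numpy as np

def virtual_trajectory(n_steps, arena=2.0, d_max=0.004, a_max=np.pi / 36,
                       margin=0.05, rng=None):
    """Generate a smooth virtual trajectory inside a square arena."""
    rng = np.random.default_rng(rng)
    x, y = np.empty(n_steps), np.empty(n_steps)
    x[0], y[0] = arena / 2, arena / 2              # start at the arena center
    for t in range(1, n_steps):
        # widen the angular range near the walls to allow sharp turns
        near_wall = (min(x[t - 1], y[t - 1]) < margin or
                     max(x[t - 1], y[t - 1]) > arena - margin)
        while True:
            d = rng.uniform(0.0, d_max)
            a = (rng.uniform(0.0, 2 * np.pi) if near_wall
                 else rng.uniform(-a_max, a_max))
            xt = x[t - 1] + d * np.sin(a)
            yt = y[t - 1] + d * np.cos(a)
            if 0.0 <= xt <= arena and 0.0 <= yt <= arena:
                break                               # repick until inside bounds
        x[t], y[t] = xt, yt
    return x, y
```

Because the maximal step size (0.004) is smaller than `margin`, the repick loop always terminates: interior steps cannot leave the arena, and near-wall steps draw from the full angular range.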

Structure and dynamics of the continuous attractor network model with integrator neurons

Request a detailed protocol

Each neuron in the network had a preferred direction θi (assumed to receive input from specific head-direction cells), which took one of the four values 0, π/2, π, or 3π/2, respectively depicting east, north, west, and south. The network was divided into local blocks of four cells representing each of these four directions, and these local blocks uniformly tiled the entire network. This organization translated to a scenario where one-fourth of the neurons in the network were endowed with inputs that had the same direction preference. Of the two sets of synaptic inputs to neurons in the network, intra-network inputs followed a Mexican-hat connectivity pattern. The recurrent weight matrix for Mexican-hat connectivity was achieved through a difference of Gaussians, given by Burak and Fiete, 2009:

(3) Wij = W0(xi − xj − l êθj)
(4) W0(x) = a exp(−γ|x|²) − exp(−β|x|²)

where Wij represented the synaptic weight from neuron j (located at xj) to neuron i (located at xi), and êθj defined the unit vector pointing along the θj direction. This weight matrix was endowed with a center-shifted center-surround connectivity, with the parameter l (default value: 2) defining the amount of shift along êθj. In the difference of Gaussians formulation in Equation 4, the parameter a regulated the sign of the synaptic weights and was set to 1, defining an all-inhibitory network. The other parameters were γ = 1.1 × β, with β = 3/λ² and λ (default value: 13) defining the periodicity of the lattice (Burak and Fiete, 2009).
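The weight matrix of Equations 3–4 can be sketched as follows (a rough illustration on an n × n lattice; it omits the periodic wrap-around used in torus-topology CAN models, and the 2 × 2 tiling of preferred directions is one plausible arrangement of the four-cell blocks described above):

```python
import numpy as np

def mexican_hat_weights(n=60, lam=13.0, a=1.0, l=2.0):
    """Center-shifted difference-of-Gaussians recurrent weights (Eqs. 3-4)."""
    beta = 3.0 / lam**2
    gamma = 1.1 * beta
    # lattice coordinates and preferred-direction unit vectors (E, N, W, S)
    coords = np.array([(i, j) for i in range(n) for j in range(n)], float)
    dirs = np.array([[1, 0], [0, 1], [-1, 0], [0, -1]], float)
    pref = dirs[(coords[:, 0] % 2 + 2 * (coords[:, 1] % 2)).astype(int)]
    # W_ij = W0(x_i - x_j - l * e_theta_j), shifted along j's preferred direction
    diff = coords[:, None, :] - coords[None, :, :] - l * pref[None, :, :]
    d2 = np.sum(diff**2, axis=-1)
    return a * np.exp(-gamma * d2) - np.exp(-beta * d2)
```

With a = 1 and γ > β, W0 is non-positive for every distance, consistent with the all-inhibitory network described above.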

The second set of synaptic inputs to individual neurons, arriving as feed-forward inputs based on the velocity of the animal and the preferred direction of the neuron, was computed as:

(5) Bi = 1 + α êθi · v

where α denoted a gain parameter for the velocity inputs (velocity scaling factor), v represented the velocity vector derived from the trajectory of the virtual animal, and êθi was the unit vector along the preferred direction of neuron i.

The dynamics of the rate-based integrator neurons, driven by these two sets of inputs was then computed as:

(6) τ dSi/dt + Si = f(Σj Wij Sj + Bi)

where f represented the neural transfer function, implemented as a simple rectification non-linearity, and Si(t) denoted the activity of neuron i at time point t. The default value of the integration time constant (τ) of the neural response was 10 ms. CAN models were initialized with randomized values of Si (Si(0) ∈ [0, 1]) for all neurons. For a stable spontaneous pattern to emerge over the neural lattice, an initial constant feed-forward drive was provided by setting the velocity input to zero for the initial 100 ms period. The time step (dt) for numerical integration was 0.5 ms when we employed the real trajectory and 1 ms for simulations with virtual trajectories. We confirmed that the use of 1 ms time steps for virtual trajectories did not hamper the accuracy of the outcomes. Activity patterns of all neurons were recorded for each time step. For visualization of the results, the spike threshold was set at 0.1 (a.u.).
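A single Euler update of Equation 6 can be sketched as follows (rectification as the transfer function f, per the text; the function name is illustrative):

```python
import numpy as np

def can_step(S, W, B, tau=10.0, dt=1.0):
    """One Euler step of the integrator dynamics (Eq. 6):
    tau dS/dt + S = f(W S + B), with f a rectification non-linearity."""
    drive = np.maximum(W @ S + B, 0.0)   # rectified recurrent + feed-forward drive
    return S + (dt / tau) * (drive - S)  # relax toward the rectified drive
```

With zero recurrent weights and a constant feed-forward input B, repeated steps relax S exponentially toward f(B), as expected from the first-order dynamics.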

Incorporating biological heterogeneities into the CAN model

Request a detailed protocol

We introduced intrinsic, afferent, and synaptic forms of biological heterogeneities by independently randomizing the values of integration time constant (τ), velocity scaling factor (α), and the connectivity matrix (Wij), respectively, across neurons in the CAN model. Specifically, in the homogeneous CAN model, these parameters were set to a default value and were identical across all neurons in the network. However, in heterogeneous networks, each neuron in the network was assigned a different value for these parameters, each picked from respective uniform distributions (Table 1). We progressively expanded the ranges of the respective uniform distributions to progressively increase the degree of heterogeneity (Table 1). We built CAN models with four different forms of heterogeneities: networks that were endowed with one of intrinsic, afferent, and synaptic forms of heterogeneities, and networks that expressed all forms together. In networks that expressed only one form of heterogeneity, the other two parameters were set identical across all neurons. In networks expressing all forms of heterogeneities, all three parameters were randomized with the span of the uniform distribution for each parameter concurrently increasing with the degree of heterogeneity (Table 1). We simulated different trials of a CAN model by employing different sets of initial randomization of activity values (Si0) for all the cells, while keeping all other model parameters (including the connectivity matrix, trajectory, and the specific instance of heterogeneities) unchanged (Figure 3—figure supplement 1).
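The per-neuron parameter randomization described above can be sketched as follows (a minimal sketch; the 20% spread and the lower cutoff are illustrative, with the actual spans for each degree of heterogeneity given in Table 1):

```python
import numpy as np

def randomize_parameter(base, spread_frac, n_cells, lower=None, rng=None):
    """Draw per-neuron parameter values from a uniform distribution centered
    on `base`, spanning +/- spread_frac * base (spans per Table 1)."""
    rng = np.random.default_rng(rng)
    lo, hi = base * (1 - spread_frac), base * (1 + spread_frac)
    if lower is not None:
        lo = max(lo, lower)          # absolute cutoff (e.g., 1 ms for tau)
    return rng.uniform(lo, hi, size=n_cells)
```

For example, the first degree of intrinsic heterogeneity (20% spread around τ = 10 ms) would correspond to `randomize_parameter(10.0, 0.2, 3600)`.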

Introducing resonance in rate-based neurons: phenomenological model

Request a detailed protocol

To assess the frequency-dependent response properties, we employed a chirp stimulus c100(t), defined as a constant-amplitude sinusoidal input whose frequency increased linearly as a function of time, spanning 0–100 Hz in 100 s. We fed c100(t) as the input to a rate-based model neuron and recorded the response of the neuron:

(7) τ ds/dt + s = c100(t)

We computed the Fourier transform of the response s(t) as S(f) and employed the magnitude of S(f) to evaluate the frequency-dependent response properties of the neuron. Expectedly, the integrator neuron acted as an LPF (Figure 5A,B).

A simple means to elicit resonance from the response of a low-pass system is to feed the output of the low-pass system to a HPF, with the interaction between these filters resulting in resonance (Hutcheon and Yarom, 2000; Narayanan and Johnston, 2008). We tested this by employing the c100(t) stimulus, using the low-pass response s(t) to the chirp stimulus from Equation 7:

(8) h = s (ds/dt)^ε

Here, h(t) represented the output of the resultant resonator neuron, and ε defined an exponent that regulated the slope of the frequency-selective response of the high-pass filter. When ε = 0, h(t) trivially reduces to the low-pass response s(t). The magnitude of the Fourier transform of h(t), H(f), manifested resonance in response to the c100(t) stimulus (Figure 5A,C). This model for achieving resonance in single neurons was referred to as a phenomenological resonator.

Having confirmed that the incorporation of a HPF would yield resonating dynamics, we employed this formulation to define the dynamics of a resonator neuron in the CAN model through the combination of the existing low-pass kinetics (Equation 6) and the high-pass kinetics. Specifically, we obtained Si for each neuron i in the CAN model from Equation 6 and redefined neural activity as the product of this Si (from Equation 6) and its derivative raised to an exponent:

(9) Si := R Si (dSi/dt)^ε

where R was a scaling factor for matching the response of resonator neurons with that of integrator neurons, and ε defined the exponent of the high-pass filter. Together, whereas the frequency-dependent response of the integrator was controlled by the integration time constant (τ), that of a resonator depended on the τ of the integrator as well as the HPF exponent ε (Figure 5D,E).
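Equations 7–8 can be sketched as follows (a minimal sketch: simple Euler integration, and the absolute value inside the fractional power is an illustrative choice to keep h real-valued; time is in seconds here, so τ = 0.01 corresponds to the 10 ms default):

```python
import numpy as np

def chirp(T=100.0, f_max=100.0, dt=1e-3):
    """Constant-amplitude chirp c100(t): frequency sweeps 0 -> f_max over T s."""
    t = np.arange(0.0, T, dt)
    return t, np.sin(np.pi * (f_max / T) * t**2)   # instantaneous f = f_max * t / T

def resonator_response(tau=0.01, eps=0.3, T=100.0, dt=1e-3):
    """Low-pass response s(t) (Eq. 7) and phenomenological resonator
    h = s * |ds/dt|^eps (Eq. 8); eps value is illustrative."""
    t, c = chirp(T=T, dt=dt)
    s = np.zeros_like(c)
    for i in range(1, len(c)):                     # Euler integration of Eq. 7
        s[i] = s[i - 1] + (dt / tau) * (c[i - 1] - s[i - 1])
    ds = np.gradient(s, dt)
    h = s * np.abs(ds) ** eps
    return t, s, h
```

Because the chirp's instantaneous frequency grows linearly with time, the envelope of |s| peaks at early (low) frequencies while |h| peaks at an intermediate time, the signature of band-pass resonance.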

To simulate homogeneous CAN models with resonator neurons, all integrator neurons in the standard homogeneous model (Equations 3–6) were replaced with resonator neurons. Intrinsic, synaptic, and afferent heterogeneities were introduced as previously described (Table 1) to build heterogeneous CAN models with resonator neurons. The other processes, including the initialization procedure of the resonator neuron network, were identical to those of the integrator neuron network. In simulations where τ was changed from its base value of 10 ms, the spans of the uniform distributions that defined the five degrees of heterogeneities were appropriately rescaled and centered at the new value of τ. For instance, when τ was set to 8 ms, the spans of the uniform distribution for the first and the fifth degrees of heterogeneity were 6.4–9.6 ms (20% of the base value on either side) and 1–16 ms (100% of the base value on either side, with an absolute cutoff at 1 ms), respectively.

Mechanistic model for introducing intrinsic resonance in rate-based neurons

Request a detailed protocol

Neuronal intrinsic resonance was achieved by incorporating a slow negative feedback into the single-neuronal dynamics of rate-based neurons (Figure 9A). We tested the emergence of intrinsic resonance using the c100(t) stimulus described earlier (Equation 7; Figure 9D–E). With the c100(t) stimulus as input, the dynamics of the mechanistic model of resonance were as follows:

(10) τ dS/dt = −S − g m + c100
(11) dm/dt = (m∞ − m)/τm

Here, S governed neuronal activity, m defined the feedback state variable, and g (default value: 0.015) represented the feedback strength. The slow kinetics of the negative feedback were controlled by the feedback time constant (τm), with a default value of 75 ms. For resonance to manifest, τm > τ. The steady-state feedback kernel (m∞) of the negative feedback was sigmoidally dependent on the output of the neuron (S), with default values of 0.3 for the half-maximal activity (S1/2) and 0.1 for the slope (k):

(12) m∞ = 1/(1 + exp(−(S − S1/2)/k))

The magnitude of the Fourier transform of S(t) in this system of differential equations, S(f), was assessed for the expression of resonance in response to the c100(t) stimulus (Figure 9D). This model for achieving resonance in single neurons was referred to as a mechanistic resonator.
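The mechanistic resonator of Equations 10–12 can be integrated as follows (a minimal Euler sketch using the default parameter values quoted above; time in seconds, so τ = 0.01 and τm = 0.075):

```python
import numpy as np

def mechanistic_resonator(c, dt=1e-3, tau=0.01, tau_m=0.075,
                          g=0.015, s_half=0.3, k=0.1):
    """Integrate the slow-negative-feedback model (Eqs. 10-12) for input c(t)."""
    S = np.zeros_like(c)
    m = np.zeros_like(c)
    for i in range(1, len(c)):
        # sigmoidal steady-state feedback kernel (Eq. 12)
        m_inf = 1.0 / (1.0 + np.exp(-(S[i - 1] - s_half) / k))
        S[i] = S[i - 1] + (dt / tau) * (-S[i - 1] - g * m[i - 1] + c[i - 1])
        m[i] = m[i - 1] + (dt / tau_m) * (m_inf - m[i - 1])   # slow feedback
    return S, m
```

For a constant input, S settles slightly below the input value because the feedback variable m activates and subtracts g·m from the drive.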

These resonating neurons were incorporated within the CAN framework to assess how neuronal intrinsic resonance achieved through mechanistic means affected the grid-cell firing. The synaptic weight matrix (Equation 4) as well as the velocity dependence (Equation 5) associated with CAN model consisting of resonator neurons were identical to CAN model with integrator neurons, with the only difference in the single-neuronal dynamics:

(13) τ dSi/dt = −Si − g m(Si) + f(Σj Wij Sj + Bi)

where m(Si), evolving as per Equation 11, implemented the activity-dependent slow negative feedback that depended on the current state (Si) of the ith neuron.

Quantitative analysis of grid-cell activity

Request a detailed protocol

To quantitatively assess the impact of heterogeneities and the introduction of resonance into the CAN neurons, we employed standard measurements of grid-cell activity (Fyhn et al., 2004; Hafting et al., 2005).

We divided the space of the open arena into 100 × 100 pixels to compute the rate maps of grid-cell activity in the network. Activity spatial maps were constructed for each cell by taking the activity (Si for cell i) of the cell at each time stamp and assigning it to the index of the corresponding (x, y) location of the virtual animal in the open arena, for the entire duration of the simulation. Occupancy spatial maps were constructed by computing the probability (pm) of finding the rat in the mth pixel, employing the relative time spent by the animal in that pixel across the total simulation period. Spatial rate maps for each cell in the network were computed by normalizing their activity spatial maps by the occupancy spatial map for that run. Spatial rate maps were smoothened using a 2D Gaussian kernel with a standard deviation (σ) of 2 pixels (e.g., Figure 1A, panels in the bottom row for each value of Trun). The average firing rate (μ) of a grid cell was computed by summing the activity of all the pixels of the rate-map matrix and dividing this quantity by the total number of pixels in the rate map (N = 10,000). The peak firing rate (μmax) of a neuron was defined as the highest activity value observed across all pixels of its spatial rate map.

For estimation of grid fields, we first detected all local maxima in the 2D-smoothened version of the spatial rate maps of all neurons in the CAN model. Individual grid-firing fields were identified as contiguous regions around the local maxima, spatially spreading to a minimum of 20% activity relative to the activity at the location of the local maxima. The number of grid fields corresponding to each grid cell was calculated as the total number of local peaks detected in the spatial rate map of the cell. The mean size of the grid fields for a specific cell was estimated by calculating the ratio between the total number of pixels covered by all grid fields and the number of grid fields. The average grid-field spacing for individual grid cells was computed as the ratio of the sum of distances between the local peaks of all grid fields in the rate map to the total number of distance values.

Grid score was computed by assessing rotational symmetry in the spatial rate map. Specifically, for each neuron in the CAN model, the spatial autocorrelation value, SACφ, was computed between its spatial rate map and the map rotated by φ°, for different values of φ (30, 60, 90, 120, or 150). These SACφ values were used to calculate the grid score for the given cell as:

(14) Grid Score = min(SAC60, SAC120) − max(SAC30, SAC90, SAC150)
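A sketch of this computation (Python with SciPy's image rotation; bilinear interpolation and zero fill at the rotated corners are simplifying assumptions):

```python
import numpy as np
from scipy.ndimage import rotate

def grid_score(rate_map):
    """Equation 14: min correlation at 60/120 deg minus max at 30/90/150 deg."""
    def sac(phi):
        # Spatial autocorrelation: Pearson correlation with the rotated map
        rot = rotate(rate_map, phi, reshape=False, order=1)
        return np.corrcoef(rate_map.ravel(), rot.ravel())[0, 1]
    return min(sac(60), sac(120)) - max(sac(30), sac(90), sac(150))
```

A hexagonally periodic map correlates strongly with its 60° and 120° rotations, yielding a positive score; a square-periodic map correlates at 90° instead, yielding a negative score.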

Spatial information rate (IS) in bits per second was calculated by:

(15) IS = Σm pm μm log2(μm / μ)

where pm defined the probability of finding the rat in mth pixel, μm represented the mean firing rate of the grid cell in the mth pixel, and µ denoted the average firing rate of grid cell. Sparsity was computed as the ratio between the square mean rate and mean square rate:

(16) Sparsity = μ² / (Σi pi μi²)
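Equations 15 and 16 translate directly into code. A minimal NumPy sketch, assuming μ is the pixel-averaged rate defined above and that pixels with zero rate are excluded from the logarithm:

```python
import numpy as np

def spatial_info_and_sparsity(rate_map, occ_prob):
    """Equation 15 (bits per second) and equation 16 (sparsity)."""
    p, r = occ_prob.ravel(), rate_map.ravel()
    mu = r.mean()                               # average firing rate (sum / N pixels)
    nz = (p > 0) & (r > 0)                      # log term defined only where r > 0
    info = np.sum(p[nz] * r[nz] * np.log2(r[nz] / mu))
    sparsity = mu**2 / np.sum(p * r**2)         # square mean rate / mean square rate
    return info, sparsity
```

A spatially uniform rate map carries zero information and has sparsity 1; sharply localized firing drives the information rate up and the sparsity toward 0.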

Quantitative analysis of grid-cell temporal activity in the spectral domain

To understand the impact of network heterogeneities on spectral properties of grid-cell activities under the CAN framework, we used the Fourier transform of the temporal activity of all the grid cells in a network. First, we assessed the difference in the magnitude spectra of temporal activity of the grid cells (n = 3600) in the homogeneous network compared to the corresponding grid cells in the heterogeneous networks (e.g., Figure 4A). Next, we normalized this difference in magnitude spectra for each grid cell with respect to the sum of their respective maximum magnitude for the homogeneous and the heterogeneous networks. Quantitatively, if Shet(f) and Shomo(f) defined neuronal activity in spectral domain for a neuron in a heterogeneous network and for the same neuron in the corresponding homogeneous network, respectively, then the normalized difference was computed as:

(17) ΔShet-homo(f) = (Shet(f) − Shomo(f)) / (max(Shet(f)) + max(Shomo(f)))

Note that this normalization was essential to account for potential differences in the maximum values of Shet(f) and Shomo(f). Finally, we computed the variance of this normalized difference, ΔShet-homo(f), across all the cells in the networks (e.g., Figure 4B).
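The computation behind equation 17 and the across-cell variance can be sketched as follows (a NumPy illustration; the real-valued FFT over matched-length activity traces and the function name are assumptions):

```python
import numpy as np

def normalized_spectral_difference(s_het, s_homo, fs):
    """Per-neuron normalized difference of magnitude spectra (equation 17),
    and its variance across all cells at each frequency.

    s_het, s_homo : arrays of shape (n_cells, n_samples), matched neuron-by-neuron
    fs            : sampling rate in Hz
    """
    S_het = np.abs(np.fft.rfft(s_het, axis=1))     # magnitude spectra, heterogeneous
    S_homo = np.abs(np.fft.rfft(s_homo, axis=1))   # magnitude spectra, homogeneous
    # Normalize by the sum of the per-neuron spectral maxima
    norm = S_het.max(axis=1, keepdims=True) + S_homo.max(axis=1, keepdims=True)
    dS = (S_het - S_homo) / norm
    freqs = np.fft.rfftfreq(s_het.shape[1], d=1.0 / fs)
    return freqs, dS, dS.var(axis=0)               # variance across cells
```

Note that the spectrum here is taken over the entire activity trace in one pass, with no windowed periodogram averaging, matching the description in the text.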

In addition to using these normalized differences for quantifying spectral signatures of neural activity, we performed octave analysis on the magnitude spectra of the temporal activity of the grid cells to confirm the impact of heterogeneities or resonating neurons on different frequency octaves. Specifically, we computed the percentage of area under the curve (AUC) for each octave (0–2 Hz, 2–4 Hz, 4–8 Hz, and 8–16 Hz) from the magnitude spectra (Figure 13—figure supplement 1B,C). We performed a similar octave analysis on the variance of normalized difference for networks endowed with integrator or resonator neurons (Figure 13B, Figure 13—figure supplement 3D–F).
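The octave analysis reduces, for each magnitude spectrum, to the fraction of spectral area within each band. A minimal sketch, assuming uniform frequency bins and taking the total area over the full spectrum:

```python
import numpy as np

def octave_auc_percent(freqs, magnitude,
                       octaves=((0, 2), (2, 4), (4, 8), (8, 16))):
    """Percentage of area under the magnitude spectrum in each frequency octave."""
    total = magnitude.sum()   # discrete AUC over the full spectrum (uniform bins)
    return {f"{lo}-{hi} Hz":
            100.0 * magnitude[(freqs >= lo) & (freqs < hi)].sum() / total
            for lo, hi in octaves}
```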

Computational details

Grid-cell network simulations were performed in MATLAB 2018a (Mathworks Inc, USA) with a simulation step size of 1 ms, unless otherwise specified. All data analyses and plotting were performed using custom-written software within the IGOR Pro (Wavemetrics, USA) or MATLAB environments, and all statistical analyses were performed using the R statistical package (http://www.R-project.org/). To avoid false interpretations and to emphasize heterogeneities in simulation outcomes, the entire range of measurements is reported in figures rather than only the summary statistics (Rathour and Narayanan, 2019).

Data availability

All data generated or analyzed during this study are included in the manuscript and supporting files. The custom-written simulations and analyses code (in MATLAB) employed for simulations are available as source code.


Decision letter

  1. Timothy O'Leary
    Reviewing Editor; University of Cambridge, United Kingdom
  2. Ronald L Calabrese
    Senior Editor; Emory University, United States
  3. Alessandro Treves
    Reviewer; Scuola Internazionale Superiore di Studi Avanzati, Italy

Our editorial process produces two outputs: i) public reviews designed to be posted alongside the preprint for the benefit of readers; ii) feedback on the manuscript for the authors, including requests for revisions, shown below. We also include an acceptance summary that explains what the editors found interesting or important about the work.

Acceptance summary:

Grid cells in the rodent entorhinal cortex are believed to contribute to internal representations of space and other continuous quantities via periodic firing patterns. Using extensive simulations, Mittal and Narayanan show that a leading continuous attractor model of how such patterns emerge is fragile to biologically relevant heterogeneities. The authors show how this fragility is rescued by introducing intrinsic resonance in the dynamics of cells in the network. Such resonance is widely observed in the entorhinal system. This work therefore shows an important potential role for single cell properties in regulating network-level computations.

Decision letter after peer review:

Thank you for submitting your article "Resonating neurons stabilize heterogeneous grid-cell networks" for consideration by eLife. Your article has been reviewed by 2 peer reviewers, and the evaluation has been overseen by a Reviewing Editor and Ronald Calabrese as the Senior Editor. The following individual involved in review of your submission has agreed to reveal their identity: Alessandro Treves (Reviewer #2).

The reviewers have discussed their reviews with one another, and the Reviewing Editor has drafted this to help you prepare a revised submission.

Essential revisions:

1) Address the reviewer's questions about the causal role of resonance in stabilising grid patterns in this specific continuous attractor model. Provide a clearer and more complete description of the single-neuron dynamics.

2) Substantiate the results with more systematic modelling or mathematical analysis, possibly in a simplified model, to provide intuition or demonstrate the mechanism underpinning the observed stabilisation of grid fields. How specific are these effects to the CAN model architecture and/or grid fields?

Reviewer #1 (Recommendations for the authors):

The authors succeed in conveying a clear and concise description of how intrinsic heterogeneity affects continuous attractor models. The main claim, namely that resonant neurons could stabilize grid-cell patterns in medial entorhinal cortex, is striking.

I am intrigued by the use of a nonlinear filter composed of the product of s with its temporal derivative raised to an exponent. Why this particular choice? Or, to be more specific, would a linear bandpass filter not have served the same purpose?

The magnitude spectra are subtracted and then normalized by a sum. I have slight misgivings about the normalization, but I am more worried that, as no specific formula is given, some MATLAB function has been used. What bothers me a bit is that, depending on how the spectrogram/periodogram is computed (in particular, averaged over windows), one would naturally expect lower frequency components to be more variable. But this excess variability at low frequencies is a major point in the paper.

Which brings me to the main thesis of the manuscript: given the observation of how heterogeneities increase the variability in the low temporal frequency components, the way resonant neurons stabilize grid patterns is by suppressing these same low frequency components.

I am not entirely convinced that the observed correlation implies causality. The low temporal frequency spectra are an indirect reflection of the regularity or irregularity of the pattern formation on the network, induced by the fact that there is velocity coupling to the input and hence dynamics on the network. Heterogeneities will distort the pattern on the network, that is true, but it isn't clear how introducing a bandpass property in temporal frequency space affects spatial stability causally.

Put it this way: imagine all neurons were true oscillators, only capable of oscillating at 8 Hz. If they were to synchronize within a bump, one will have the field blinking on and off. Nothing wrong with that, and it might be that such oscillatory pattern formation on the network might be more stable than non-oscillatory pattern formation (perhaps one could even demonstrate this mathematically, for equivalent parameter settings), but this kind of causality is not what is shown in the manuscript.

Reviewer #2 (Recommendations for the authors):

I believe in self-organization and NOT in normative recommendations by reviewers: do this, don't do that. Everybody should be able to publish, in some form, what they feel is important for others to know; so I applaud the new open reviews in eLife. Besides, this manuscript is written very well, clearly, I would say equanimously, and I do not have other points to raise beyond what I observed in the open review. The figures are attractive, maybe a bit too many and too rich, but clear and engaging. My only suggestion would be, take your band-pass units, and show that they produce grids without any recurrent network. It will be fun.

https://doi.org/10.7554/eLife.66804.sa1

Author response

Essential revisions:

1) Address the reviewer's questions about the causal role of resonance in stabilising grid patterns in this specific continuous attractor model. Provide a clearer and more complete description of the single-neuron dynamics.

To address this and associated concerns about the “brute force amputation of the low frequencies” that we had employed earlier to construct resonators, we constructed a new mechanistic model for single-neuron resonance that matches the dynamical behavior of physiological resonators. Specifically, we noted that physiological resonance is elicited by a slow activity-dependent negative feedback (Hutcheon and Yarom, 2000). To incorporate resonance into our rate-based model neurons, we mimicked this by introducing a slow negative feedback loop into our single-neuron dynamics (the motivations are elaborated in the new results subsection “Mechanistic model of neuronal intrinsic resonance: Incorporating a slow activity-dependent negative feedback loop”). The single-neuron dynamics of mechanistic resonators were defined as follows:

τ dS/dt = −S − g m + Ie
dm/dt = (m∞ − m) / τm
m∞ = 1 / (1 + exp((S1/2 − S)/k))

Here, S governed neuronal activity, m defined the feedback state variable, τ represented the integration time constant, Ie was the external current, and g represented the feedback strength. The slow kinetics of the negative feedback were controlled by the feedback time constant (τm). For resonance to manifest, τm > τ (Hutcheon and Yarom, 2000). The steady-state feedback kernel (m∞) of the negative feedback is sigmoidally dependent on the output of the neuron (S), defined by two parameters: half-maximal activity (S1/2) and slope (k). The single-neuron dynamics are elaborated in detail in the methods section (new subsection: Mechanistic model for introducing intrinsic resonance in rate-based neurons).
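To illustrate how these dynamics produce resonance, the two equations can be integrated numerically and probed with sinusoidal Ie across frequencies; the response amplitude then peaks at an intermediate frequency whenever τm > τ. This is a forward-Euler sketch in Python, not the authors' code, and all parameter values here are assumptions:

```python
import numpy as np

def resonator_response(freq, tau=0.010, tau_m=0.075, g=5.0,
                       s_half=0.3, k=0.1, i_dc=0.5, i_amp=0.05,
                       dt=1e-4, t_end=3.0):
    """Forward-Euler integration of the slow negative-feedback resonator:
    tau dS/dt = -S - g*m + Ie ;  dm/dt = (m_inf(S) - m)/tau_m.
    Returns the steady-state response amplitude to a sinusoidal Ie."""
    n = int(t_end / dt)
    t = np.arange(n) * dt
    Ie = i_dc + i_amp * np.sin(2 * np.pi * freq * t)  # sinusoid about a holding input
    S = np.zeros(n)
    m = 0.0
    for i in range(1, n):
        m_inf = 1.0 / (1.0 + np.exp((s_half - S[i - 1]) / k))  # sigmoidal kernel
        m += dt * (m_inf - m) / tau_m                          # slow feedback (tau_m > tau)
        S[i] = S[i - 1] + dt * (-S[i - 1] - g * m + Ie[i - 1]) / tau
    tail = S[n // 2:]                        # discard transients
    return (tail.max() - tail.min()) / 2.0   # response amplitude

# Band-pass profile: amplitude is suppressed at low and high frequencies
profile = {f: resonator_response(f) for f in (0.5, 2.0, 10.0, 40.0)}
```

With these assumed parameters the slow feedback attenuates slow inputs while the integration time constant attenuates fast inputs, producing the band-pass (resonant) profile.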

We first demonstrate that the introduction of a slow negative feedback loop introduces resonance into single-neuron dynamics (new Figure 9D–E). We performed systematic sensitivity analyses associated with the parameters of the feedback loop and characterized the dependencies of intrinsic neuronal resonance on model parameters (new Figure 9F–I). We demonstrate that the incorporation of resonance through a negative feedback loop was able to generate grid-patterned activity in the 2D CAN model employed here, with clear dependencies on model parameters (new Figure 10; new Figure 10-Supplements1–2). Next, we incorporated heterogeneities into the network and demonstrated that the introduction of resonance through a negative feedback loop stabilized grid-patterned generation in the heterogeneous 2D CAN model (new Figure 11).

The mechanistic route to introducing resonance allowed us to probe the basis for the stabilization of grid-patterned activity more thoroughly. Specifically, with physiological resonators, resonance manifests only when the feedback loop is slow (new Figure 9I; Hutcheon and Yarom, 2000). This gave us an additional mechanistic handle to directly probe the role of resonance in stabilizing grid-patterned activity. We assessed the emergence of grid-patterned activity in heterogeneous CAN models constructed with neurons with different τm values (new Figure 12). Strikingly, we found that when the τm value was small (resulting in fast feedback loops), there was no stabilization of grid-patterned activity in the CAN model, especially with the highest degree of heterogeneities (new Figure 12). With progressive increases in τm, the patterns stabilized, with grid score increasing at τm = 25 ms (new Figure 12) and beyond (new Figure 11B; τm = 75 ms). Finally, our spectral analyses comparing frequency components of homogeneous vs. heterogeneous resonator networks (new Figure panels 13D–F) showed the suppression of low-frequency perturbations in heterogeneous CAN networks.

Together, the central hypothesis in our study was that intrinsic neuronal resonance could stabilize heterogeneous grid-cell networks through targeted suppression of low-frequency perturbations. In the revised manuscript, we present the following lines of evidence in support of this hypothesis (mentioned now in the first paragraph of the Discussion section of the revised manuscript):

1. Neural-circuit heterogeneities destabilized grid-patterned activity generation in a 2D CAN model (Figures 2–3).

2. Neural-circuit heterogeneities predominantly introduced perturbations in the low-frequency components of neural activity (Figure 4).

3. Targeted suppression of low-frequency components through phenomenological (Figure 5C) or through mechanistic (new Figure 9D) resonators resulted in stabilization of the heterogeneous CAN models (Figure 8 and new Figure 11). We note that the stabilization was achieved irrespective of the means employed to suppress low-frequency components: an activity-independent suppression of low-frequencies (Figure 5) or an activity-dependent slow negative feedback loop (new Figure 9).

4. Changing the feedback time constant τm in mechanistic resonators, without changes to neural gain or feedback strength, allowed us to control the specific range of frequencies that would be suppressed. Our analyses showed that a slow negative feedback loop, which results in targeted suppression of low-frequency components, was essential in stabilizing grid-patterned activity (new Figure 12). As the slow negative feedback loop and the resultant suppression of low frequencies mediate intrinsic resonance, these analyses provide important lines of evidence for the role of targeted suppression of low frequencies in stabilizing grid-patterned activity.

5. We demonstrate that the incorporation of phenomenological (Figure 13A–C) or mechanistic (new Figure panels 13D–F) resonators specifically suppressed lower frequencies of activity in the 2D CAN model.

6. Finally, the incorporation of resonance through a negative feedback loop allowed us to link our analyses to the well-established role of network motifs involving negative feedback loops in inducing stability and suppressing external/internal noise in engineering and biological systems. We envisage intrinsic neuronal resonance as a cellular-scale activity dependent negative feedback mechanism, a specific instance of a well-established network motif that effectuates stability and suppresses perturbations across different networks (Savageau, 1974; Becskei and Serrano, 2000; Thattai and van Oudenaarden, 2001; Austin et al., 2006; Dublanche et al., 2006; Raj and van Oudenaarden, 2008; Lestas et al., 2010; Cheong et al., 2011; Voliotis et al., 2014). A detailed discussion on this important link to the stabilizing role of this network motif, with appropriate references to the literature is included in the new discussion subsection “Slow negative feedback: Stability, noise suppression, and robustness”.

We thank the reviewers and the editors for their comments, as it allowed us to introduce a physiologically-rooted model for resonance and provide more lines of evidence in support of our hypothesis. We have provided complete descriptions of single neuron dynamics associated with the three kinds of single-neuron models (integrators, phenomenological resonators and mechanistic resonators) in the methods section (equations 6–13), also providing detailed sensitivity analyses with reference to their frequency-dependent response properties (Figure 8 and new Figure 11).

2) Substantiate the results with more systematic modelling or mathematical analysis, possibly in a simplified model, to provide intuition or demonstrate the mechanism underpinning the observed stabilisation of grid fields. How specific are these effects to the CAN model architecture and/or grid fields?

As mentioned above (Q1), in the revised manuscript, we have provided additional systematic modelling and the additional lines of analyses with a physiologically-rooted mechanistic model for resonance (new Figures 9–12, 13D–F; four new Figure supplements). In probing the mechanism behind stabilization, we perturbed the time constant associated with the feedback loop, directly demonstrating the critical role of targeted suppression of neural activity in stabilizing grid-patterned activity in heterogeneous CAN models (new Figure 12). The incorporation of resonance through a negative feedback loop provides further intuition through the well-established role of negative feedback loops in stabilizing systems and reducing the impact of perturbations. This intuition, with appropriate references to engineering and biology literature, is now elaborated in the new discussion subsection “Slow negative feedback: Stability, noise suppression, and robustness”.

With reference to the question on the specificity of these effects to the CAN model architecture and/or grid fields, we submit that our conclusions are limited to the 2D rate-based CAN model presented here (Discussion subsection: “Future directions and considerations in model interpretation”). We postulate that mechanisms that suppress low-frequency components could be a generalized route to stabilize heterogeneous biological networks, based on our analyses and based on the well-established stabilizing role of negative feedback loops. However, we do not claim generalization to other networks or to other models for generating grid-patterned neural activity. As several theoretical frameworks and computational models do not explicitly incorporate the several heterogeneities in afferent, intrinsic and synaptic properties of biological networks, it is essential that the stability of these networks is first assessed in the presence of heterogeneities. The next step would be to assess if there are perturbations in low-frequency components as a consequence of introducing heterogeneities. Finally, if the introduction of heterogeneities destabilized network function and resulted in low-frequency perturbations, then the role of intrinsic resonators in stabilizing these heterogeneities could be probed. As heterogeneities could manifest in different ways, and as their impact on networks could be very different in other networks compared to the network assessed here, we believe that such detailed network- and heterogeneity-specific analyses would be essential before any generalization.

Finally, as our analyses are limited to the stabilizing role of resonating neurons in the heterogeneous CAN network analyzed here, we have not explored the role of adaptation or resonance in the generation of grid-patterned neural activity. We have, however, noted with appropriate citations (Discussion subsection: “Future directions and considerations in model interpretation”, first paragraph) that there are other models for the generation of grid-patterned neural activity where resonance has distinct roles compared to the stabilizing role proposed here. As the impact of neural heterogeneities has not been assessed in these other models, future studies could assess if heterogeneities could indeed destabilize these models and if resonance could act as a stabilizing mechanism there as well.

Reviewer #1 (Recommendations for the authors):

The authors succeed in conveying a clear and concise description of how intrinsic heterogeneity affects continuous attractor models. The main claim, namely that resonant neurons could stabilize grid-cell patterns in medial entorhinal cortex, is striking.

We thank the reviewer for their time and effort in evaluating our manuscript, and for their rigorous evaluation and positive comments on our study.

I am intrigued by the use of a nonlinear filter composed of the product of s with its temporal derivative raised to an exponent. Why this particular choice? Or, to be more specific, would a linear bandpass filter not have served the same purpose?

Please note that the exponent was merely a mechanism to effectively tune the resonance frequency of the resonating neuron. In the revised manuscript, we have introduced a new physiologically rooted means to introduce intrinsic neuronal resonance, thereby confirming that network stabilization achieved was independent of the formulation employed to achieve resonance.

The magnitude spectra are subtracted and then normalized by a sum. I have slight misgivings about the normalization, but I am more worried that, as no specific formula is given, some MATLAB function has been used. What bothers me a bit is that, depending on how the spectrogram/periodogram is computed (in particular, averaged over windows), one would naturally expect lower frequency components to be more variable. But this excess variability at low frequencies is a major point in the paper.

We have now provided the specific formula employed for normalization as equation (16) of the revised manuscript. We have also noted that this was performed to account for potential differences in the maximum value of the homogeneous vs. heterogeneous spectra. The details are provided in the Methods subsection “Quantitative analysis of grid cell temporal activity in the spectral domain” of the revised manuscript. Please note that what is computed is the spectra of the entire activity pattern, and not a periodogram or a scalogram. There was no tiling of the time-frequency plane involved, thus eliminating potential roles of variables there on the computation here.

In addition to using variances of normalized differences to quantify spectral distributions, we have also independently employed octave-based analyses (which doesn’t involve normalized differences) to strengthen our claims about the impact of heterogeneities and resonance on different bands of frequency. These octave-based analyses also confirm our conclusions on the impact of heterogeneities and neuronal resonance on low-frequency components.

Finally, we would like to emphasize that spectral computations are the same for different networks, with networks designed in such a way that there was only one component that was different. For instance, in introducing heterogeneities, all other parameters of the network (the specific trajectory, the seed values, the neural and network parameters, the connectivity, etc.) remained exactly the same with the only difference introduced being confined to the heterogeneities. Computation of the spectral properties followed identical procedures with activity from individual neurons in the two networks, and comparison was with reference to identically placed neurons in the two networks. Together, based on the several routes to quantifying spectral signatures, based on the experimental design involved, and based on the absence of any signal-specific tiling of the time-frequency plane, we argue that the impact of heterogeneities or the resonators on low-frequency components is not an artifact of the analysis procedures.

We thank the reviewer for raising this issue, as it helped us to elaborate on the analysis procedures employed in our study.

Which brings me to the main thesis of the manuscript: given the observation of how heterogeneities increase the variability in the low temporal frequency components, the way resonant neurons stabilize grid patterns is by suppressing these same low frequency components.

I am not entirely convinced that the observed correlation implies causality. The low temporal frequency spectra are an indirect reflection of the regularity or irregularity of the pattern formation on the network, induced by the fact that there is velocity coupling to the input and hence dynamics on the network. Heterogeneities will distort the pattern on the network, that is true, but it isn't clear how introducing a bandpass property in temporal frequency space affects spatial stability causally.

Put it this way: imagine all neurons were true oscillators, only capable of oscillating at 8 Hz. If they were to synchronize within a bump, one will have the field blinking on and off. Nothing wrong with that, and it might be that such oscillatory pattern formation on the network might be more stable than non-oscillatory pattern formation (perhaps one could even demonstrate this mathematically, for equivalent parameter settings), but this kind of causality is not what is shown in the manuscript.

The central hypothesis of our study was that intrinsic neuronal resonance could stabilize heterogeneous grid-cell networks through targeted suppression of low-frequency perturbations. In the revised manuscript, we present the following lines of evidence in support of this hypothesis (mentioned now in the first paragraph of the Discussion section of the revised manuscript):

1. Neural-circuit heterogeneities destabilized grid-patterned activity generation in a 2D CAN model (Figures 2–3).

2. Neural-circuit heterogeneities predominantly introduced perturbations in the low frequency components of neural activity (Figure 4).

3. Targeted suppression of low-frequency components through phenomenological (Figure 5C) or through mechanistic (new Figure 9D) resonators resulted in stabilization of the heterogeneous CAN models (Figure 8 and new Figure 11). We note that the stabilization was achieved irrespective of the means employed to suppress low-frequency components: an activity-independent suppression of low-frequencies (Figure 5) or an activity-dependent slow negative feedback loop (new Figure 9).

4. Changing the feedback time constant τm in mechanistic resonators, without changes to neural gain or feedback strength, allowed us to control the specific range of frequencies that would be suppressed. Our analyses showed that a slow negative feedback loop, which results in targeted suppression of low-frequency components, was essential in stabilizing grid-patterned activity (new Figure 12). As the slow negative feedback loop and the resultant suppression of low frequencies mediate intrinsic resonance, these analyses provide important lines of evidence for the role of targeted suppression of low frequencies in stabilizing grid-patterned activity.

5. We demonstrate that the incorporation of phenomenological (Figure 13A–C) or mechanistic (new Figure panels 13D–F) resonators specifically suppressed lower frequencies of activity in the 2D CAN model.

6. Finally, the incorporation of resonance through a negative feedback loop allowed us to link our analyses to the well-established role of network motifs involving negative feedback loops in inducing stability and suppressing external/internal noise in engineering and biological systems. We envisage intrinsic neuronal resonance as a cellular-scale activity-dependent negative feedback mechanism, a specific instance of a well-established network motif that effectuates stability and suppresses perturbations across different networks (Savageau, 1974; Becskei and Serrano, 2000; Thattai and van Oudenaarden, 2001; Austin et al., 2006; Dublanche et al., 2006; Raj and van Oudenaarden, 2008; Lestas et al., 2010; Cheong et al., 2011; Voliotis et al., 2014). A detailed discussion on this important link to the stabilizing role of this network motif, with appropriate references to the literature is included in the new discussion subsection “Slow negative feedback: Stability, noise suppression, and robustness”.

We thank the reviewer for their detailed comments. These comments helped us introduce a more physiologically rooted mechanistic form of resonance, where we were able to assess the impact of the slow kinetics of the negative feedback on network stability, thereby providing more direct lines of evidence for our hypothesis. This also allowed us to link resonance to the well-established stability motif: the negative feedback loop. We also note that our analyses don't employ resonance as a route to introducing oscillations in the network, but as a means for targeted suppression of low-frequency perturbations through a negative feedback loop. Given the strong quantitative links of negative feedback loops to introducing stability and suppressing the impact of perturbations in engineering applications and biological networks, we envisage intrinsic neuronal resonance as a stability-inducing, cellular-scale, activity-dependent negative feedback mechanism.

Reviewer #2 (Recommendations for the authors):

I believe in self-organization and NOT in normative recommendations by reviewers: do this, don't do that. Everybody should be able to publish, in some form, what they feel is important for others to know; so I applaud the new open reviews in eLife. Besides, this manuscript is written very well, clearly, I would say equanimously, and I do not have other points to raise beyond what I observed in the open review. The figures are attractive, maybe a bit too many and too rich, but clear and engaging. My only suggestion would be, take your band-pass units, and show that they produce grids without any recurrent network. It will be fun.

We thank the reviewer for their detailed and rigorous review, which helped us very much in strengthening the lines of evidence in support of our central conclusions in the manuscript. We thank the reviewer for their belief in self-organization in terms of how the review process should evolve.

We believe that the new resonator model, the demonstration of network stability with the mechanistic resonators and the strong theoretical link to the stability-through-negative feedback literature provides stronger support for our original conclusions. With reference to the grid-cell model without the recurrent network, we agree that it is an important direction to pursue. However, we also respectfully submit that the direction would be a digression from the central hypothesis of our analyses about the stabilizing role of resonating neurons in heterogeneous CAN-based grid-cell networks. We note that addressing the question on grid-patterned activity with individual resonator neurons, in conjunction with place-cell inputs and intrinsic/synaptic plasticity, would answer the question of generation of grid-like patterns. However, as our central question relates to heterogeneous networks and stability therein, we instead focused on the reviewer's comments on our resonator model and built a more realistic model for neural resonators and demonstrated stability with these resonator models.

The generalizability of our conclusions to other grid-cell models is an important question, but answering it would require exhaustive analyses of the impact of heterogeneities and resonance on those models in terms of their ability to generate stable grid-patterned activity. We have stated the limitations of our analyses in the Discussion section, where we also suggest directions for future research involving other models of grid-patterned activity generation.

We believe that the new lines of evidence presented in the revised manuscript, including the new resonator model and the theoretical link to the stability literature involving negative feedback, have considerably strengthened our central conclusions. We thank the reviewer for their rigorous, step-by-step review identifying specific problems in our analyses, which enabled us to rectify these issues in the revised manuscript. We sincerely hope that the revised manuscript presents stronger lines of evidence in support of the central conclusions of the overall analyses.


https://doi.org/10.7554/eLife.66804.sa2

Article and author information

Author details

  1. Divyansh Mittal

    Cellular Neurophysiology Laboratory, Molecular Biophysics Unit, Indian Institute of Science, Bangalore, India
    Contribution
    Conceptualization, Software, Formal analysis, Validation, Investigation, Visualization, Methodology, Writing - original draft, Writing - review and editing
    Competing interests
    No competing interests declared
    ORCID iD: 0000-0003-4233-8176
  2. Rishikesh Narayanan

    Cellular Neurophysiology Laboratory, Molecular Biophysics Unit, Indian Institute of Science, Bangalore, India
    Contribution
    Conceptualization, Resources, Formal analysis, Supervision, Funding acquisition, Investigation, Visualization, Methodology, Writing - original draft, Project administration, Writing - review and editing
    For correspondence
    rishi@iisc.ac.in
    Competing interests
    No competing interests declared
    ORCID iD: 0000-0002-1362-4635

Funding

Wellcome Trust DBT India Alliance (IA/S/16/2/502727)

  • Rishikesh Narayanan

Human Frontier Science Program (Career development award)

  • Rishikesh Narayanan

Department of Biotechnology, Ministry of Science and Technology (DBT-IISc Partnership Program)

  • Rishikesh Narayanan

Indian Institute of Science (Revati and Satya Nadham Atluri Chair Professorship)

  • Rishikesh Narayanan

Ministry of Human Resource Development (Scholarship funds)

  • Divyansh Mittal

The funders had no role in study design, data collection and interpretation, or the decision to submit the work for publication.

Acknowledgements

The authors thank Dr. Poonam Mishra, Dr. Sufyan Ashhad, and the members of the cellular neurophysiology laboratory for helpful discussions and for comments on a draft of this manuscript. The authors thank Dr. Ila Fiete for helpful discussions. This work was supported by the Wellcome Trust-DBT India Alliance (Senior fellowship to RN; IA/S/16/2/502727), Human Frontier Science Program (HFSP) Organization (RN), the Department of Biotechnology through the DBT-IISc partnership program (RN), the Revati and Satya Nadham Atluri Chair Professorship (RN), and the Ministry of Human Resource Development (RN and DM).

Senior Editor

  1. Ronald L Calabrese, Emory University, United States

Reviewing Editor

  1. Timothy O'Leary, University of Cambridge, United Kingdom

Reviewer

  1. Alessandro Treves, Scuola Internazionale Superiore di Studi Avanzati, Italy

Version history

  1. Preprint posted: December 11, 2020 (view preprint)
  2. Received: January 22, 2021
  3. Accepted: July 29, 2021
  4. Accepted Manuscript published: July 30, 2021 (version 1)
  5. Version of Record published: August 11, 2021 (version 2)

Copyright

© 2021, Mittal and Narayanan

This article is distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use and redistribution provided that the original author and source are credited.


Cite this article

  1. Divyansh Mittal
  2. Rishikesh Narayanan
(2021)
Resonating neurons stabilize heterogeneous grid-cell networks
eLife 10:e66804.
https://doi.org/10.7554/eLife.66804
