Resonating neurons stabilize heterogeneous grid-cell networks
Abstract
A central theme that governs the functional design of biological networks is their ability to sustain stable function despite widespread parametric variability. Here, we investigated the impact of distinct forms of biological heterogeneities on the stability of a two-dimensional continuous attractor network (CAN) implicated in grid-patterned activity generation. We show that increasing degrees of biological heterogeneities progressively disrupted the emergence of grid-patterned activity and resulted in progressively larger perturbations in low-frequency neural activity. We postulated that targeted suppression of low-frequency perturbations could ameliorate heterogeneity-induced disruptions of grid-patterned activity. To test this, we introduced intrinsic resonance, a physiological mechanism to suppress low-frequency activity, either by adding an additional high-pass filter (phenomenological) or by incorporating a slow negative feedback loop (mechanistic) into our model neurons. Strikingly, CAN models with resonating neurons were resilient to the incorporation of heterogeneities and exhibited stable grid-patterned firing. We found CAN models with mechanistic resonators to be more effective in targeted suppression of low-frequency activity, with the slow kinetics of the negative feedback loop essential in stabilizing these networks. As low-frequency perturbations (1/f noise) are pervasive across biological systems, our analyses suggest a universal role for mechanisms that suppress low-frequency activity in stabilizing heterogeneous biological networks.
Introduction
Stability of network function, defined as the network’s ability to elicit robust functional outcomes despite perturbations to or widespread variability in its constitutive components, is a central theme that governs the functional design of several biological networks. Biological systems exhibit ubiquitous parametric variability spanning different scales of organization, quantified through statistical heterogeneities in the underlying parameters. Strikingly, in spite of such large-scale heterogeneities, outputs of biological networks are stable and are precisely tuned to meet physiological demands. A central question that spans different scales of organization concerns the ability of biological networks to achieve physiological stability in the face of ubiquitous parametric variability (Turrigiano and Nelson, 2000; Edelman and Gally, 2001; Maslov and Sneppen, 2002; Stelling et al., 2004; Marder and Goaillard, 2006; Barkai and Shilo, 2007; Kitano, 2007; Félix and Barkoulas, 2015).
Biological heterogeneities are known to play critical roles in governing stability of network function, through intricate and complex interactions among mechanisms underlying functional emergence (Edelman and Gally, 2001; Renart et al., 2003; Marder and Goaillard, 2006; Tikidji-Hamburyan et al., 2015; Mishra and Narayanan, 2019; Rathour and Narayanan, 2019). However, an overwhelming majority of theoretical and modeling frameworks lack the foundation to evaluate the impact of such heterogeneities on network output, as they employ unnatural homogeneous networks in assessing network function. The paucity of heterogeneous network frameworks is partly attributable to the enormous analytical or computational costs involved in assessing heterogeneous networks. In this study, we quantitatively address questions on the impact of distinct forms of biological heterogeneities on the functional stability of a two-dimensional continuous attractor network (CAN), which has been implicated in the generation of patterned neuronal activity in grid cells of the medial entorhinal cortex (Burak and Fiete, 2009; Knierim and Zhang, 2012; Couey et al., 2013; Domnisoru et al., 2013; Schmidt-Hieber and Häusser, 2013; Yoon et al., 2013; Tukker et al., 2021). Although the continuous attractor framework has offered insights about information encoding across several neural circuits (Samsonovich and McNaughton, 1997; Seung et al., 2000; Renart et al., 2003; Wills et al., 2005; Burak and Fiete, 2009; Knierim and Zhang, 2012; Schmidt-Hieber and Häusser, 2013; Yoon et al., 2013; Kim et al., 2017), the fundamental question on the stability of 2D CAN models in the presence of biological heterogeneities remains unexplored. Here, we systematically assessed the impact of biological heterogeneities on stability of emergent spatial representations in a 2D CAN model and unveiled a physiologically plausible neural mechanism that promotes stability despite the expression of heterogeneities.
We first developed an algorithm to generate virtual trajectories that closely mimicked animal traversals in an open arena, improving computational efficiency by covering the entire arena within a shorter duration. We employed these virtual trajectories to drive a rate-based homogeneous CAN model that elicited grid-patterned neural activity (Burak and Fiete, 2009) and systematically introduced different degrees of three distinct forms of biological heterogeneities. The three distinct forms of biological heterogeneities that we introduced, either individually or together, were in neuronal intrinsic properties, in afferent inputs carrying behavioral information, and in local-circuit synaptic connectivity. We found that the incorporation of these different forms of biological heterogeneities disrupted the emergence of grid-patterned activity by introducing perturbations in neural activity, predominantly in low-frequency components. In the default model where neurons were integrators, grid patterns and spatial information in neural activity were progressively lost with increasing degrees of biological heterogeneities and were accompanied by progressive increases in low-frequency perturbations.
As heterogeneity-induced perturbations to neural activity were predominantly in the lower frequencies, we postulated that suppressing low-frequency perturbations could ameliorate the disruptive impact of biological heterogeneities on grid-patterned activity. We recognized intrinsic neuronal resonance as an established biological mechanism that suppresses low-frequency components, effectuated by the expression of resonating conductances endowed with specific biophysical characteristics (Hutcheon and Yarom, 2000; Narayanan and Johnston, 2008). Consequently, we hypothesized that intrinsic neuronal resonance could stabilize the heterogeneous grid-cell network through targeted suppression of low-frequency perturbations. To test this hypothesis, we developed two distinct strategies to introduce intrinsic resonance in our rate-based neuronal model to mimic the function of resonating conductances in biological neurons: (1) a phenomenological approach where an additional tunable high-pass filter (HPF) was incorporated into single-neuron dynamics and (2) a mechanistic approach where resonance was realized through a slow negative feedback loop akin to the physiological mechanism behind neuronal intrinsic resonance (Hutcheon and Yarom, 2000). We confirmed that the emergence of grid-patterned activity was not affected by replacing all the integrator neurons in the homogeneous CAN model with theta-frequency resonators (either phenomenological or mechanistic). We systematically incorporated different forms of biological heterogeneities into the 2D resonator CAN model and found that intrinsic neuronal resonance stabilized heterogeneous neural networks, through suppression of low-frequency components of neural activity.
Although this stabilization was observed with both phenomenological and mechanistic resonator networks, the mechanistic resonator was extremely effective in suppressing low-frequency activity without introducing spurious high-frequency components into neural activity. Importantly, we found that the slow kinetics of the negative feedback loop was essential in stabilizing CAN networks built with mechanistic resonators.
Together, our study unveils an important role for intrinsic neuronal resonance in stabilizing network physiology through the suppression of heterogeneity-induced perturbations in low-frequency components of network activity. Our analyses suggest that intrinsic neuronal resonance constitutes a cellular-scale activity-dependent negative feedback mechanism, a specific instance of a well-established network motif that effectuates stability and suppresses perturbations across different biological networks (Savageau, 1974; Becskei and Serrano, 2000; Thattai and van Oudenaarden, 2001; Austin et al., 2006; Dublanche et al., 2006; Raj and van Oudenaarden, 2008; Lestas et al., 2010; Cheong et al., 2011; Voliotis et al., 2014). As the dominance of low-frequency perturbations is pervasive across biological networks (Hausdorff and Peng, 1996; Gilden, 2001; Gisiger, 2001; Ward, 2001; Buzsaki, 2006), we postulate that mechanisms that suppress low-frequency components could be a generalized route to stabilize heterogeneous biological networks.
Results
The rate-based CAN model consisting of a 2D neural sheet (default size: 60 × 60 = 3600 neurons) with the default integrator neurons for eliciting grid-cell activity was adopted from Burak and Fiete, 2009. The sides of this neural sheet were connected, yielding a toroidal (or periodic) network configuration. Each neuron in the CAN model received two distinct sets of synaptic inputs, one from other neurons within the network and another feed-forward afferent input that was dependent on the velocity of the virtual animal. The movement of the virtual animal was modeled to occur in a circular 2D spatial arena (Figure 1A; Real trajectory) and was simulated employing recordings of rat movement (Hafting et al., 2005) to replicate earlier results (Burak and Fiete, 2009). However, the real trajectory spanned 590 s of real-world time and required considerable simulation time to sufficiently explore the entire arena, essential for computing high-resolution spatial activity maps. Consequently, the computational costs required for exploring the parametric space of the CAN model, with different forms of network heterogeneities in networks of different sizes and endowed with distinct kinds of neurons, were exorbitant. Therefore, we developed a virtual trajectory, mimicking smooth animal movement within the arena, but with sharp turns at the borders.
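The architecture described above can be made concrete with a minimal sketch of one update step of such a rate-based CAN sheet. The function name, rectifying nonlinearity, time step, and toy connectivity below are illustrative assumptions, not the exact implementation of Burak and Fiete, 2009:

```python
import numpy as np

def can_step(s, W, e_pref, v, dt=0.5, tau=10.0, alpha=45.0):
    """One Euler update of a rate-based CAN sheet (a sketch of the
    general form of the dynamics described in the text).

    s      : (N,) firing rates of the N neurons on the (toroidal) sheet
    W      : (N, N) recurrent center-surround connectivity
    e_pref : (N, 2) preferred-direction unit vectors of the neurons
    v      : (2,) velocity of the virtual animal
    """
    # Feed-forward afferent drive: baseline plus velocity coupling
    # (alpha plays the role of the velocity scaling factor)
    b = 1.0 + alpha * (e_pref @ v)
    # Recurrent drive plus afferent drive, passed through rectification
    drive = np.maximum(W @ s + b, 0.0)
    # Leaky integration with time constant tau (both dt and tau in ms)
    return s + (dt / tau) * (drive - s)

# Toy usage on a 10 x 10 sheet with random connectivity (placeholder W)
rng = np.random.default_rng(0)
N = 100
s = rng.random(N)
W = rng.normal(0.0, 0.05, (N, N))
e_pref = np.tile(np.array([[1, 0], [0, 1], [-1, 0], [0, -1]]),
                 (25, 1)).astype(float)
s_next = can_step(s, W, e_pref, np.array([0.1, 0.0]))
```

In the full model, W would encode the center-shifted Mexican-hat profile on the torus rather than the random placeholder used here.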
Our analyses demonstrated the ability of virtual trajectories to cover the entire arena in less time (Figure 1A; Virtual trajectory), without compromising the accuracy of the spatial activity maps constructed from this trajectory in comparison to the maps obtained with the real trajectory (Figure 1B, Figure 1—figure supplement 1). Specifically, the correlation values between the spatial autocorrelation of rate maps of individual grid cells for the real trajectory and those from the virtual trajectory at different runtimes were high across all measured runtimes (Figure 1B). These correlation values showed a saturating increase with increase in runtime. As there was no substantial improvement in accuracy beyond a runtime of 100 s, we employed 100 s virtual trajectories for all simulations (compared to the 590 s of the real trajectory). Therefore, our virtual trajectory covered similar areas in approximately one-sixth the duration (Figure 1A), which reduced the simulation duration by a factor of ~10 when compared to the real rat trajectory, while yielding similar spatial activity maps (Figure 1B, Figure 1—figure supplement 1). Our algorithm also allowed us to generate fast virtual trajectories mimicking rat trajectories in open arenas of different shapes (Circle: Figure 1A; Square: Figure 2E).
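The virtual-trajectory idea, smooth heading drift with a sharp turn back into the arena at the border, can be sketched as follows; all numeric parameters (speed, heading-drift scale, turn jitter) are illustrative assumptions rather than the values used in the study:

```python
import numpy as np

def virtual_trajectory(T=100.0, dt=0.01, radius=1.0, speed=0.25,
                       turn_sd=0.2, seed=0):
    """Sketch of a virtual trajectory in a circular arena: constant
    speed, smoothly drifting heading, and a sharp turn back toward the
    interior whenever the border is reached."""
    rng = np.random.default_rng(seed)
    n = int(T / dt)
    pos = np.zeros((n, 2))
    theta = rng.uniform(0.0, 2.0 * np.pi)  # initial heading
    p = np.zeros(2)
    for i in range(n):
        # Smooth heading drift (small random rotation each step)
        theta += turn_sd * np.sqrt(dt) * rng.standard_normal()
        q = p + speed * dt * np.array([np.cos(theta), np.sin(theta)])
        if np.linalg.norm(q) >= radius:
            # Hit the border: sharp turn, aimed roughly back at the center
            theta = np.arctan2(-p[1], -p[0]) + rng.uniform(-0.5, 0.5)
            q = p  # stay in place this step
        p = q
        pos[i] = p
    return pos

path = virtual_trajectory()
```

By construction every sampled position stays inside the arena, and the heading is continuous except at the border turns, which is the qualitative behavior described above.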
Biologically prevalent network heterogeneities disrupted the emergence of grid-cell activity in CAN models
The simulated CAN model (Figure 1) is an idealized homogeneous network of intrinsically identical neurons, endowed with precise local connectivity patterns and identical response properties for afferent inputs. Although this idealized CAN model provides an effective phenomenological framework to understand and simulate grid-cell activity, the underlying network does not account for the ubiquitous biological heterogeneities that span neuronal intrinsic properties and synaptic connectivity patterns. Would the CAN model sustain grid-cell activity in a network endowed with different degrees of biological heterogeneities in neuronal intrinsic properties and strengths of afferent/local synaptic inputs?
To address this, we systematically introduced three distinct forms of biological heterogeneities into the rate-based CAN model. We introduced intrinsic heterogeneity by randomizing the value of the neuronal integration time constant (τ) across neurons in the network. Within the rate-based framework employed here, randomization of τ reflected physiological heterogeneities in neuronal excitability properties, and larger spans of τ defined higher degrees of intrinsic heterogeneity (Table 1; Figure 2A). Afferent heterogeneity referred to heterogeneities in the coupling of afferent velocity inputs onto individual neurons. Different degrees of afferent heterogeneities were introduced by randomizing the velocity scaling factor (α) in each neuron through uniform distributions of different ranges (Table 1; Figure 2B). Synaptic heterogeneity involved the local connectivity matrix (W), and was introduced as additive jitter to the default center-surround connectivity matrix. Synaptic jitter was independently sampled for each connection, from a uniform distribution whose span defined the degree of heterogeneity (Table 1; Figure 2C). In homogeneous networks, τ (=10 ms), α (=45) and W (center-shifted Mexican hat connectivity without jitter) were identical for all neurons in the network. We assessed four distinct sets of heterogeneous networks: three sets endowed with one of intrinsic, afferent and synaptic heterogeneities, and a fourth where all three forms of heterogeneities were co-expressed. When all heterogeneities were present together, all three sets of parameters were randomized, with the degree of heterogeneity defining the bounds of the associated distributions (Table 1).
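The sampling scheme described above can be sketched as follows. The distribution bounds below are placeholders standing in for the spans listed in Table 1, and the connection count is an arbitrary toy value:

```python
import numpy as np

def sample_heterogeneities(n_neurons, n_conn, spans, rng):
    """Sketch of sampling the three forms of heterogeneity.

    `spans` holds uniform-distribution bounds for the chosen degree of
    heterogeneity (illustrative values; the actual spans are those of
    Table 1 in the text).
    """
    tau = rng.uniform(*spans["tau"], n_neurons)      # intrinsic: time constant (ms)
    alpha = rng.uniform(*spans["alpha"], n_neurons)  # afferent: velocity scaling
    jit = spans["w_jitter"]
    # Synaptic: additive jitter, sampled independently for each connection
    w_jitter = rng.uniform(-jit, jit, n_conn)
    return tau, alpha, w_jitter

rng = np.random.default_rng(1)
tau, alpha, w_jitter = sample_heterogeneities(
    3600, 3600 * 100,  # e.g., ~100 local connections per neuron (assumption)
    {"tau": (8.0, 12.0), "alpha": (35.0, 55.0), "w_jitter": 0.1}, rng)
```

In the homogeneous case the spans collapse to single points (τ = 10 ms, α = 45, zero jitter); widening them implements progressively higher degrees of heterogeneity.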
We found that the incorporation of any of the three forms of heterogeneities into the CAN model resulted in the disruption of grid pattern formation, with the deleterious impact increasing with increasing degree of heterogeneity (Figures 2E and 3). Quantitatively, we employed grid score (Fyhn et al., 2004; Hafting et al., 2005) to measure the emergence of pattern formation in the CAN model activity, and found a reduction in grid score with increase in the degree of each form of heterogeneity (Figure 2F). We found a hierarchy in the disruptive impact of different types of heterogeneities, with synaptic heterogeneity producing the largest reduction in grid score, followed by afferent heterogeneity. Intrinsic heterogeneity was the least disruptive to the emergence of grid-cell activity, with a high degree of heterogeneity required to hamper grid-pattern formation (Figure 2E,F). Simultaneous progressive introduction of all three forms of heterogeneities at multiple degrees resulted in a similar progression of grid-pattern disruption (Figure 2E,F). The introduction of the highest degree of all heterogeneities into the CAN model resulted in a complete loss of patterned activity and a marked reduction in the grid score values of all neurons in the network (Figure 2E,F). We confirmed that these results were not artifacts of specific network initialization choices by observing similar grid score reductions in five additional trials with distinct initializations of CAN models endowed with all three forms of heterogeneities (Figure 3—figure supplement 1F). In addition, to rule out the possibility that these conclusions were specific to the choice of the virtual trajectory employed, we repeated our simulations and analyses with a different virtual trajectory (Figure 3—figure supplement 2A) and found similar reductions in grid score with increase in the degree of all heterogeneities (Figure 3—figure supplement 2B).
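The grid score used here is conventionally computed from the rotational symmetry of the spatial autocorrelogram (Fyhn et al., 2004; Hafting et al., 2005). A minimal numpy sketch of that measure, omitting the annulus masking used in practice and using nearest-neighbor rotation for self-containment:

```python
import numpy as np

def rotate_sq(a, deg):
    """Rotate a square array about its center (nearest-neighbor
    sampling; points falling outside the array are set to zero)."""
    n = a.shape[0]
    c = (n - 1) / 2.0
    th = np.deg2rad(deg)
    i, j = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
    y, x = i - c, j - c
    ys = np.rint(np.cos(th) * y - np.sin(th) * x + c).astype(int)
    xs = np.rint(np.sin(th) * y + np.cos(th) * x + c).astype(int)
    ok = (ys >= 0) & (ys < n) & (xs >= 0) & (xs < n)
    out = np.zeros_like(a)
    out[ok] = a[ys[ok], xs[ok]]
    return out

def grid_score(ac):
    """Lowest correlation of the autocorrelogram with its 60/120-degree
    rotations minus the highest with its 30/90/150-degree rotations."""
    corr = {d: np.corrcoef(ac.ravel(), rotate_sq(ac, d).ravel())[0, 1]
            for d in (30, 60, 90, 120, 150)}
    return min(corr[60], corr[120]) - max(corr[30], corr[90], corr[150])

# Toy check: a hexagonal pattern (three plane waves 60 degrees apart,
# the idealized grid-field geometry) should score high
n = np.arange(-32, 32)
X, Y = np.meshgrid(n, n)
k = 2 * np.pi / 10.0
hexpat = sum(np.cos(k * (X * np.cos(a) + Y * np.sin(a)))
             for a in (0.0, np.pi / 3, 2 * np.pi / 3))
```

A square lattice, by contrast, is 90-degree symmetric and therefore yields a negative score under this definition.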
Detailed quantitative analysis of grid-cell activity showed that the introduction of heterogeneities did not result in marked population-wide changes to broad measures such as average firing rate, mean size of grid fields and average grid spacing (Figure 3, Figure 3—figure supplements 1–2). Instead, the introduction of parametric heterogeneities resulted in a loss of spatial selectivity and disruption of patterned activity, reflected as progressive reductions in grid score, peak firing rate, information rate and sparsity (Figures 2–3, Figure 3—figure supplements 1–2). Our results showed that in networks with intrinsic or afferent heterogeneities, grid score of individual neurons was not dependent on the actual value of the associated parameter (τ or α, respectively), but was critically reliant on the degree of heterogeneity (Figure 3H). Specifically, for a given degree of intrinsic or afferent heterogeneity, the grid score value spanned similar ranges for the low or high value of the associated parameter; but grid score reduced with increased degree of heterogeneities. In addition, grid score of cells reduced with increase in local synaptic jitter, which increased with the degree of synaptic heterogeneity (Figure 3H).
Thus far, our analyses involved a CAN model with a specific size (60×60). Would our results on heterogeneity-driven disruption of grid-patterned activity extend to CAN models of other sizes? To address this, we repeated our simulations with networks of size 40×40, 50×50, 80×80 and 120×120, and introduced different degrees of all three forms of heterogeneities into the network. We found that the disruptive impact of heterogeneities on grid-patterned activity was consistent across all tested networks (Figure 3—figure supplement 3A), manifesting as a reduction in grid score with increased degree of heterogeneities (Figure 3—figure supplement 3B). Our results also showed that the disruptive impact of heterogeneities on grid-patterned activity increased with increase in network size. This size dependence specifically manifested as a complete loss of spatially selective firing patterns (Figure 3—figure supplement 3A) and the consistent drop in the grid score value to zero (i.e., %change = –100 in Figure 3—figure supplement 3B) across all neurons in the 120 × 120 network with high degree of heterogeneities. We also observed a progressive reduction in the average and the peak firing rates with increase in the degree of heterogeneities across all network sizes, although with a more pronounced impact in larger networks (Figure 3—figure supplement 3C,D).
Together, our results demonstrated that the introduction of physiologically relevant heterogeneities into the CAN model resulted in a marked disruption of grid-patterned activity. The disruption manifested as a loss in spatial selectivity in the activity of individual cells in the network, progressively linked to the degree of heterogeneity. Although all forms of heterogeneities resulted in disruption of patterned activity, there was differential sensitivity to different heterogeneities, with heterogeneity in local network connectivity playing a dominant role in hampering spatial selectivity and patterned activity.
Incorporation of biological heterogeneities predominantly altered neural activity in low frequencies
How does the presence of heterogeneities affect activity patterns of neurons in the network as they respond to the movement of the virtual animal? A systematic way to explore patterns of neural activity is to assess the relative power of specific frequency components in neural activity. Therefore, we subjected the temporal outputs of individual neurons (across the entire period of the simulation) to spectral analysis and found neural activity to follow a typical response curve that was dominated by lower frequency components, with little power in the higher frequencies (e.g., Figure 4A, HN). This is to be expected as a consequence of the low-pass filtering inherent to the neuronal model, reflective of membrane filtering.
We performed spectral analyses of temporal activity patterns of neurons from networks endowed with different forms of heterogeneities at various degrees (Figure 4). As biological heterogeneities were incorporated either by reducing or by increasing default parameter values (Table 1), there was considerable variability in how individual neurons responded to such heterogeneities (Figure 4). Specifically, when compared against the respective neuron in the homogeneous network, some neurons showed increases in activity magnitude at certain frequencies and others showed a reduction in magnitude (e.g., Figure 4A). To quantify this variability in neuronal responses, we first computed the difference between frequency-dependent activity profiles of individual neurons obtained in the presence of specific heterogeneities and of the same neuron in the homogeneous network (with identical initial conditions and subjected to identical afferent inputs). We computed activity differences for all the 3600 neurons in the network and plotted the variance of these differences as a function of frequency (e.g., Figure 4B). Strikingly, we found that the impact of introducing biological heterogeneities predominantly altered lower frequency components of neural activity, with little to no impact on higher frequencies. In addition, the variance in the deviation of neural activity from the homogeneous network progressively increased with higher degrees of heterogeneities, with large deviations occurring in the lower frequencies (Figure 4).
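The variance-of-spectral-differences analysis described above can be sketched as follows. The array layout and the synthetic example (neuron-specific low-frequency components as a stand-in for heterogeneity-induced deviations) are assumptions for illustration:

```python
import numpy as np

def spectral_deviation_variance(act_het, act_hom):
    """Variance, across neurons, of the difference between the magnitude
    spectra of each neuron's activity in the heterogeneous network and
    in the homogeneous network.

    act_het, act_hom : (n_neurons, n_timepoints) temporal activity
    """
    d = (np.abs(np.fft.rfft(act_het, axis=1))
         - np.abs(np.fft.rfft(act_hom, axis=1)))  # per-neuron deviation
    return d.var(axis=0)                           # variance across neurons

# Toy usage: each neuron gets a private low-frequency (2 Hz) component of
# random amplitude on top of a shared 100 Hz component, so the deviation
# variance should concentrate at the low frequency
dt, n_neurons, n_t = 1e-3, 50, 1000
t = np.arange(n_t) * dt
rng = np.random.default_rng(2)
act_hom = np.tile(np.sin(2 * np.pi * 100 * t), (n_neurons, 1))
act_het = act_hom + (rng.uniform(0.5, 1.5, (n_neurons, 1))
                     * np.sin(2 * np.pi * 2 * t))
freqs = np.fft.rfftfreq(n_t, d=dt)
var = spectral_deviation_variance(act_het, act_hom)
```

With a 1 s window the frequency bins are 1 Hz apart, so the variance peak lands in the 2 Hz bin while the shared 100 Hz component cancels out in the difference.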
Together, these observations provided an important insight that the presence of physiologically relevant network heterogeneities predominantly affected lower frequency components of neural activity, with higher degrees of heterogeneity yielding larger variance in the deviation of low-frequency activity from the homogeneous network.
Introducing intrinsic resonance in rate-based neurons: phenomenological model
Several cortical and hippocampal neuronal structures exhibit intrinsic resonance, which allows these structures to maximally respond to a specific input frequency, with neural response falling on either side of this resonance frequency (Hutcheon and Yarom, 2000; Narayanan and Johnston, 2007; Narayanan and Johnston, 2008; Das et al., 2017). More specifically, excitatory (Erchova et al., 2004; Giocomo et al., 2007; Nolan et al., 2007; Garden et al., 2008; Pastoll et al., 2012) and inhibitory neurons (Boehlen et al., 2016) in the superficial layers of the medial entorhinal cortex manifest resonance in the theta frequency range (4–10 Hz). Therefore, modeling individual neurons in the CAN model as integrators of afferent activity is inconsistent with the resonating structure intrinsic to their physiological counterparts. Intrinsic resonance in neurons is mediated by the expression of slow restorative ion channels, such as HCN or M-type potassium channels, which mediate resonating conductances that suppress low-frequency inputs by virtue of their kinetics and voltage-dependent properties (Hutcheon and Yarom, 2000; Narayanan and Johnston, 2008; Hu et al., 2009). As the incorporation of biological heterogeneities predominantly altered low-frequency neural activity (Figure 4), we hypothesized that the expression of intrinsic neuronal resonance (especially the associated suppression of low-frequency components) could counteract the disruptive impact of biological heterogeneities on network activity, thereby stabilizing grid-like activity patterns.
An essential requirement in testing this hypothesis was to introduce intrinsic resonance in the rate-based neuronal models in our network. To do this, we noted that the integrative properties of the integrator model neuron are mediated by the low-pass filtering kinetics associated with the parameter . We confirmed the low-pass filter (LPF) characteristics of the integrator neurons by recording the response of individual neurons to a chirp stimulus (Figure 5A,B). As this provides an equivalent to the LPF associated with the membrane time constant, we needed a HPF to mimic resonating conductances that suppress low-frequency activity in biological neurons. A simple approach to suppress low-frequency components is to introduce a differentiator, which we confirmed by passing a chirp stimulus through a differentiator (Figure 5A,C). We therefore passed the outcome of the integrator (endowed with LPF characteristics) through a first-order differentiator (HPF), with the postulate that the net transfer function would manifest resonance. We tested this by first subjecting a chirp stimulus to the integrator neuron dynamics and feeding that output to the differentiator. We found that the net output expressed resonance, acting as a band-pass filter (Figure 5A,C).
Physiologically, tuning of intrinsic resonance to specific frequencies could be achieved by altering the characteristics of the HPF or the LPF (Hutcheon and Yarom, 2000; Narayanan and Johnston, 2008; Das et al., 2017; Mittal and Narayanan, 2018; Rathour and Narayanan, 2019). Matching this physiological tuning procedure, we tuned resonance frequency (fR) in our rate-based resonator model either by changing τ, which governs the LPF (Figure 5D,E), or by altering an exponent (ε; Equation 9) that regulated the slope of the HPF on the frequency axis (Figure 5F–G). We found fR to decrease with an increase in τ (Figure 5E) or a reduction in ε (Figure 5G). In summary, we mimicked the biophysical mechanisms governing resonance in neural structures to develop a phenomenological methodology to introduce and tune intrinsic resonance, yielding a tunable band-pass filtering structure in rate-based model neurons. In this model, as resonance was introduced through a HPF, a formulation that does not follow the mechanistic basis of resonance in physiological systems, we refer to this as a phenomenological model for resonating neurons.
Homogeneous CAN models constructed with phenomenological resonator neurons exhibited stable grid-patterned neural activity
How does the expression of intrinsic resonance in individual neurons of CAN models alter their grid-patterned neural activity? How do grid patterns respond to changes in resonance frequency, realized either by altering τ or ε? To address these questions, we replaced all neurons in the homogeneous CAN model with theta-frequency resonators and presented the network with the same virtual trajectory (Figure 2E). We found that CAN models with resonators were able to reliably and robustly produce grid-patterned neural activity, which were qualitatively (Figure 6A,B) and quantitatively (Figure 6C–F, Figure 6—figure supplement 1) similar to patterns produced by networks of integrator neurons across different values of τ. Importantly, increase in τ markedly increased spacing between grid fields (Figure 6D) and their average size (Figure 6E), consequently reducing the number of grid fields within the arena (Figure 6F). It should be noted that the reduction in grid score with increased τ (Figure 6C) is merely a reflection of the reduction in the number of grid fields within the arena, and not indicative of a loss of grid-patterned activity. Although the average firing rates were tuned to be similar across the resonator and the integrator networks (Figure 6—figure supplement 1A), the peak firing rate in the resonator network was significantly higher (Figure 6—figure supplement 1B). In addition, for both resonator and integrator networks, consistent with increases in grid-field size and spacing, there were marked increases in information rate (Figure 6—figure supplement 1C) and reductions in sparsity (Figure 6—figure supplement 1D) with increase in τ.
Within our model framework, it is important to emphasize that although increasing τ reduced fR in resonator neurons (Figures 5E and 6B), the changes in grid spacing and size are not a result of the change in fR, but a consequence of the altered τ. This inference follows from the observation that altering τ has qualitatively and quantitatively similar outcomes on grid spacing and size in both integrator and resonator networks (Figure 6). Further confirmation for the absence of a direct role for fR in regulating grid spacing or size (within our modeling framework) came from the invariance of grid spacing and size to changes in ε, which altered fR (Figure 7). Specifically, when we fixed τ and altered ε across resonator neurons in the homogeneous CAN model, fR of individual neurons changed (Figure 7A), but this did not markedly change grid-field patterns (Figure 7A), grid score, average grid spacing, mean size, number, or sparsity of grid fields (Figure 7B). However, increasing ε decreased grid-field sizes, the average and the peak firing rates, consequently reducing the information rate in the activity pattern (Figure 7B).
Together, these results demonstrated that homogeneous CAN models with phenomenological resonators reliably and robustly produce grid-patterned neural activity. In these models, the LPF regulated the size and spacing of the grid fields and the HPF governed the magnitude of activity and the suppression of low-frequency inputs.
Phenomenological resonators stabilized the emergence of grid-patterned activity in heterogeneous CAN models
Having incorporated intrinsic resonance into the CAN model, we were now equipped to directly test our hypothesis that the expression of intrinsic neuronal resonance could stabilize grid-like activity patterns in heterogeneous networks. We incorporated heterogeneities, retaining the same forms/degrees of heterogeneities (Table 1; Figure 2A–D) and the same virtual trajectory (Figure 2E), in a CAN model endowed with resonators to obtain the spatial activity maps for individual neurons (Figure 8A). Strikingly, we observed that the presence of resonating neurons in the CAN model stabilized grid-patterned activity in individual neurons, despite the introduction of the highest degrees of all forms of biological heterogeneities (Figure 8A). Importantly, outputs produced by the homogeneous CAN model manifested spatially precise circular grid fields, exhibiting regular triangular firing patterns across trials (e.g., Figure 1A), forms of precision that are not observed in electrophysiologically obtained grid-field patterns (Fyhn et al., 2004; Hafting et al., 2005). With the incorporation of biological heterogeneities into resonator neuron networks, however, the imprecisely shaped yet stable grid-patterned firing fields (e.g., Figure 8A) tend closer to those of biologically observed grid cells. Quantitatively, we computed grid-cell measurements across all the 3600 neurons in the network and found that all quantitative measurements (Figure 8B–I, Figure 8—figure supplement 1), including the grid score (Figure 8B) and information rate (Figure 8H), were remarkably robust to the introduction of heterogeneities (cf. Figures 2–3 for CAN models with integrator neurons).
To ensure that the robust emergence of stable grid-patterned activity in heterogeneous CAN models with resonator neurons was not an artifact of the specific network under consideration, we repeated our analyses with two additional sets of integrator-resonator comparisons (Figure 8—figure supplement 2). In these networks, we employed a baseline τ of either 14 ms (Figure 8—figure supplement 2A) or 8 ms (Figure 8—figure supplement 2B) instead of the 10 ms value employed in the default network (Figures 2 and 8). A pairwise comparison of all grid-cell measurements between networks with integrator vs. resonator neurons demonstrated that the resonator networks were resilient to the introduction of heterogeneities compared to integrator networks (Figure 8—figure supplement 2).
Together, these results demonstrated that phenomenologically incorporating intrinsic resonance into neurons of the CAN model imparts resilience to the network, facilitating the robust emergence of stable grid-cell activity patterns, despite the expression of biological heterogeneities in neuronal and synaptic properties.
Mechanistic model of neuronal intrinsic resonance: incorporating a slow activity-dependent negative feedback loop
In the previous sections, we phenomenologically introduced intrinsic resonance in rate-based neurons by incorporating an artificial high-pass filter on top of the low-pass filtering property of an integrator neuron. Physiologically, however, intrinsic resonance results from the presence of resonating conductances in the neural system, which introduce a slow activity-dependent negative feedback into the neuron. To elaborate, there are two requirements for any conductance to act as a resonating conductance (Hutcheon and Yarom, 2000). First, the current mediated by the conductance should actively suppress any changes in the membrane voltage of the neuron. In resonating conductances, this is implemented by the voltage-dependent gating properties of the ion channel mediating the resonating conductance, whereby any change in membrane voltage (de)activates the channel in such a way that the channel current suppresses the change in membrane voltage. This constitutes an activity-dependent negative feedback loop. The second requirement is that the conductance has to be slower than the membrane time constant. This requirement ensures that the cutoff frequency of the low-pass filter (associated with the membrane time constant) is greater than that of the high-pass filter (mediated by the resonating conductance). Together, these two requirements mechanistically translate to a slow activity-dependent negative feedback loop, a standard network motif in biological systems for yielding damped oscillations and resonance (Alon, 2019). In intrinsically resonating neurons, resonance is achieved because the resonating conductance actively suppresses low-frequency signals through this slow negative feedback loop, in conjunction with the suppression of higher frequency signals by the low-pass filter mediated by the membrane time constant.
Resonating conductances do not suppress high frequencies because they are slow and would not activate/deactivate with faster high-frequency signals (Hutcheon and Yarom, 2000; Narayanan and Johnston, 2008).
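This division of labor between the membrane low-pass filter and the slow negative feedback can be illustrated with a minimal linearized sketch. The equations below are not the model equations of this study; the symbols (activity x, feedback state m, feedback strength g, time constants tau and tau_f) and their values are illustrative. The frequency response is band-pass: the slow feedback suppresses low frequencies, the membrane time constant suppresses high frequencies, and making the feedback fast abolishes the resonance:

```python
import numpy as np

def freq_response(freqs_hz, tau=0.010, tau_f=0.075, g=2.0):
    """|H(f)| of a linearized rate neuron with negative feedback:
    tau dx/dt = -x + I - g*m ;  tau_f dm/dt = x - m.
    For sinusoidal input I*exp(i*w*t), eliminating m gives
    H(w) = 1 / (1 + i*w*tau + g / (1 + i*w*tau_f))."""
    w = 2 * np.pi * np.asarray(freqs_hz)
    return np.abs(1.0 / (1 + 1j * w * tau + g / (1 + 1j * w * tau_f)))

freqs = np.linspace(0.1, 50, 500)
gain_slow = freq_response(freqs, tau_f=0.075)  # slow feedback: band-pass gain
gain_fast = freq_response(freqs, tau_f=0.001)  # fast feedback: no resonance
f_R = freqs[np.argmax(gain_slow)]              # resonance frequency of the sketch
```

With these illustrative values the gain of the slow-feedback neuron peaks at an intermediate (theta-range) frequency, whereas with a fast feedback loop the gain is maximal at the lowest frequency, mirroring the requirement that the feedback be slower than the membrane time constant.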
In the single-neuron integrator model employed in this study, the LPF is implemented by the integration time constant τ. Inspired by the mechanisms behind the physiological emergence of neuronal intrinsic resonance, we mechanistically incorporated intrinsic resonance into this integrator neuronal model by introducing a slow negative feedback to the single-neuron dynamics (Figure 9A). The dynamics of the coupled evolution of neuronal activity and the feedback state variable are provided in Equations (11–13). A sigmoidal feedback kernel regulated the activity dependence, and the feedback time constant determined the kinetics of the negative feedback loop (Equations 11 and 12). The response of this single-neuron model, endowed with the slow negative feedback loop, to a pulse input manifested the typical sag associated with the expression of intrinsic resonance (Hutcheon and Yarom, 2000), owing to the slow evolution of the feedback state variable (Figure 9B,C). To directly test whether the introduction of the slow negative feedback loop yielded a resonating neuronal model, we assessed single-neuron output to a chirp stimulus (Figure 9D) and found the manifestation of intrinsic resonance when the feedback loop was activated (Figure 9E). As this resonator neuron model involving a slow negative feedback loop was derived from the mechanistic origins of intrinsic neuronal resonance, we refer to this as the mechanistic resonator model.
Resonance frequency in the mechanistic resonator model was tunable by altering the parameters associated with the feedback loop. Specifically, increasing the half-maximal activity of the feedback kernel reduced resonance frequency within the tested range of values (Figure 9F), whereas increasing the slope of the feedback kernel initially enhanced resonance frequency, which then reduced with further increases in the slope (Figure 9G). Furthermore, enhancing the strength of the feedback increased resonance frequency, reminiscent of the increase in resonance frequency with increase in resonating conductance density (Narayanan and Johnston, 2008), the parameter that defines the strength of the negative feedback loop in intrinsically resonating neurons. An important requirement for the expression of resonance through the mechanistic route that we chose is that the feedback time constant has to be slower than the integration time constant (τ = 10 ms). To test this in our model, we altered the time constant of the feedback loop while maintaining all other values constant (Figure 9I) and found that resonance did not manifest with low values of the feedback time constant. With increase in the feedback time constant, there was an initial rapid increase in resonance frequency, which then reduced upon further increase (Figure 9I). It may be noted that low values of the feedback time constant translate to a fast negative feedback loop, thus emphasizing the need for a slow negative feedback loop for the expression of resonance. These observations are analogous to the dependence of resonance frequency in intrinsically resonating neurons on the activation time constant of the resonating conductance (Narayanan and Johnston, 2008). Together, we have proposed a tunable mechanistic model for intrinsic resonance in rate-based neurons through the incorporation of a slow negative feedback loop into the neuronal dynamics.
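A time-domain caricature of such a mechanistic resonator is sketched below. These are not Equations (11–13) of the study: the Euler integration scheme, the parameter names (g for feedback strength, k and s_half for the slope and half-maximal activity of the sigmoidal kernel), and all values are illustrative assumptions. A step input should produce the characteristic sag, an early peak in activity followed by relaxation to a lower steady state, as the slow feedback variable builds up:

```python
import numpy as np

def simulate(I, dt=1e-4, tau=0.010, tau_f=0.075, g=2.0, k=10.0, s_half=0.3):
    """Euler integration of a rate neuron with a slow sigmoidal negative
    feedback loop (a sketch; parameter names/values are illustrative)."""
    s, m = 0.0, 0.0
    out = np.empty(len(I))
    for t, inp in enumerate(I):
        sig = 1.0 / (1.0 + np.exp(-k * (s - s_half)))  # sigmoidal feedback kernel
        s += dt / tau * (-s + inp - g * m)             # activity dynamics
        m += dt / tau_f * (sig - m)                    # slow feedback state
        out[t] = s
    return out

# step input (1 s at dt = 0.1 ms): the response peaks early, then sags to a
# lower steady state as the slow feedback variable catches up
I = np.ones(10000)
r = simulate(I)
peak, steady = r.max(), r[-1]
```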
Mechanistic resonators stabilized the emergence of grid-patterned activity in heterogeneous CAN models
How was grid-patterned neural activity in a homogeneous CAN model affected by the expression of intrinsic neuronal resonance through a mechanistic framework? To address this, we replaced all the integrator neurons in the homogeneous CAN model (Figure 2E; homogeneous network) with mechanistic theta-frequency resonator neurons, while maintaining identical connectivity patterns and velocity-dependent inputs. We observed that neurons within this homogeneous CAN model built with mechanistic resonators were able to produce distinct grid-patterned neural activity (Figure 10A). Sensitivity analyses showed that grid-field size and spacing were affected by the parameters associated with the slow negative feedback loop, consequently affecting the grid score (Figure 10B) and other measurements associated with the grid-patterned activity (Figure 10—figure supplements 1–2). Within the tested ranges, the strength of the feedback had limited effect on grid-patterned neural activity, whereas the slope and the half-maximal activity of the feedback kernel had strong impact on grid-field size and spacing (Figure 10). Although the grid fields were well-defined across all tested values of the slope and half-maximal activity, the grid scores were lower for some parametric combinations (Figure 10B) owing to the smaller number of grid fields within the arena (Figure 10A). Importantly, the feedback time constant had very little impact on grid-patterned neural activity (Figure 10A) and on all quantified measurements (Figure 10B, Figure 10—figure supplements 1–2). Together, these analyses demonstrated that a homogeneous CAN model built with mechanistic resonators yielded stable and robust grid-patterned neural activity, whose grid-field size and spacing could be tuned by parameters of the slow negative feedback loop that governed resonance.
Next, we systematically incorporated the different degrees of heterogeneities into the CAN model with mechanistic resonator neurons to test if resonance could stabilize grid-patterned activity in the heterogeneous CAN models. We employed five degrees of all heterogeneities with the same virtual trajectory (Figure 2E) in a CAN model endowed with mechanistic resonators to compute the spatial activity maps for individual neurons (Figure 11A). As with the network built with phenomenological resonators (Figure 8), we observed stable and robust grid-like spatial maps emerge even with the highest degree of heterogeneities in the CAN model with mechanistic resonator neurons (Figure 11). Furthermore, compared to the homogeneous model, the grid-patterned activity obtained from the heterogeneous CAN model more closely mimicked biological grid-cell activity, manifesting the imprecise grid-field shapes observed in neurons of the MEC (Figure 11). Quantitatively, the grid score (Figure 11B), average spacing (Figure 11C), and mean grid-field size (Figure 11G) were remarkably robust to the introduction of heterogeneities. However, the average (Figure 11E) and peak (Figure 11D) firing rates reduced with increase in the degree of heterogeneities, consequently affecting measurements that were dependent on firing rate (i.e., information rate and sparsity; Figure 11H–I). Together, these results demonstrate that intrinsic neuronal resonance, introduced either through phenomenological (Figure 8) or through mechanistic (Figure 11) single-neuron models, yielded stable and robust grid-like patterns in heterogeneous CAN models.
The grid-cell activity on a 2D plane represents a spatially repetitive periodic pattern. Consequently, a phase plane constructed from activity along the diagonal of the 2D plane on one axis (Figure 11—figure supplement 1A), and the spatial derivative of this activity on the other, should provide a visual representation of the impact of heterogeneities and resonance on this periodic pattern of activity. To visualize the specific changes in the periodic orbits obtained with homogeneous and heterogeneous networks constructed with integrators or resonators, we plotted the diagonal activity profiles of five randomly picked neurons using the phase-plane representation (Figure 11—figure supplement 1B). This visual representation confirmed that homogeneous CAN models built of integrators or resonators yielded stable closed-orbit trajectories, representing similarly robust grid-patterned periodic activity (Figure 11—figure supplement 1B). The disparate sizes of different orbits are merely a reflection of where the diagonal intersects a given grid field. Specifically, if the grid fields were considered to be ideal circles, the orbital size is largest when the diagonal passes through the diameter, with orbital size reducing for any other chord. Upon introduction of heterogeneities, the phase-plane plots of diagonal activity profiles from the heterogeneous integrator network lacked closed orbits (Figure 11—figure supplement 1B). This is consistent with the drastic reductions in grid score and information rate for these heterogeneous network models (Figures 2–3). In striking contrast, the phase-plane plots from the heterogeneous resonator network manifested closed orbits even with the highest degree of heterogeneities introduced, irrespective of whether resonance was introduced through a phenomenological or a mechanistic model (Figure 11—figure supplement 1B).
Although these phase-plane trajectories were noisy compared to those from the homogeneous resonator network, the presence of closed orbits indicated the manifestation of spatial periodicity in these activity patterns (Figure 11—figure supplement 1B). These observations visually demonstrated that resonator neurons stabilized the heterogeneous network and maintained spatial periodicity in grid-cell activity, irrespective of whether resonance was introduced through a phenomenological or a mechanistic model.
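The closed-orbit diagnostic described above can be sketched with a toy one-dimensional profile (an idealization, not model output): a spatially periodic "diagonal activity" traces a closed orbit in the (activity, spatial derivative) plane, whereas a profile whose spatial period drifts, a crude stand-in for heterogeneity-induced disruption, fails to return to the same phase-plane state one nominal period later:

```python
import numpy as np

s = np.linspace(0.0, 4.0, 4001)                        # position along diagonal (a.u.)
act = np.cos(2 * np.pi * s) ** 2                       # periodic profile, period 1
act_het = np.cos(2 * np.pi * s * (1 + 0.1 * s)) ** 2   # drifting period: aperiodic

def orbit_gap(x, ds=0.001, period_steps=1000, i0=500):
    """Phase-plane distance between the states (x, dx/ds) one nominal
    period apart: ~0 for a periodic profile (closed orbit), large otherwise."""
    dx = np.gradient(x, ds)
    i1 = i0 + period_steps
    return np.hypot(x[i1] - x[i0], dx[i1] - dx[i0])

gap_periodic = orbit_gap(act)      # ~0: the orbit closes
gap_drift = orbit_gap(act_het)     # large: the orbit fails to close
```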
The slow kinetics of the negative feedback loop in mechanistic resonators is a critical requirement for stabilizing heterogeneous CAN models
Our rationale behind the introduction of intrinsic resonance into the neuron was that it would suppress the low-frequency perturbations introduced by biological heterogeneities (Figure 4). Consequently, our hypothesis is that the targeted suppression of low-frequency components resulted in the stabilization of the CAN model. Two lines of evidence for this hypothesis were that the suppression of low-frequency components through phenomenological (Figure 8) or through mechanistic (Figure 11) resonators resulted in stabilization of the heterogeneous CAN models. An advantage of recruiting a mechanistic model for introducing resonance is that sensitivity analyses on its parameters could provide valuable mechanistic insights about how such stabilization is achieved. With reference to our specific hypothesis, the ability to tune resonance by altering the feedback time constant (Figure 9I), without altering the feedback kernel or the feedback strength, provides an efficient route to understanding the mechanistic origins of the stabilization. Specifically, the value of the feedback time constant (default value: 75 ms) governs the slow kinetics of the feedback loop and is the source of the targeted suppression of low-frequency components. Reducing this time constant would imply a faster negative feedback loop, thereby extending suppression to higher-frequency components. As a further test of our hypothesis on the role of suppressing low-frequency components in stabilizing the CAN network, we asked if mechanistic resonators with lower values of the feedback time constant would be able to stabilize heterogeneous CAN models (Figure 12). If fast negative feedback loops (i.e., low values of the feedback time constant) were sufficient to stabilize heterogeneous CAN models, that would counter our hypothesis on the requirement of targeted suppression of low-frequency components. To the contrary, we found that heterogeneous CAN networks with neurons endowed with fast negative feedback loops were incapable of stabilizing the grid-cell network (Figure 12).
With progressive increase in the feedback time constant, the grid-patterned firing stabilized even for high degrees of heterogeneities (Figure 12B,C for low values of the feedback time constant; Figure 11B for the default value of 75 ms), thus providing an additional direct line of evidence supporting our hypothesis on the need for targeted suppression of low-frequency components in stabilizing the network. We noted that the impact of altering the feedback time constant was specifically related to stabilizing heterogeneous networks by suppressing heterogeneity-driven low-frequency perturbations, because altering this time constant did not alter grid-patterned activity in homogeneous resonator networks (Figure 10A,B). Together, our results provide multiple lines of evidence that the slow kinetics of the negative feedback loop in single-neuron dynamics (Figure 9A) mediates targeted suppression of low-frequency signals (Figure 9D,E), thereby yielding intrinsic neuronal resonance (Figure 9I) and stabilizing grid-patterned activity in heterogeneous CAN models (Figures 11 and 12).
Resonating neurons suppressed the impact of biological heterogeneities on low-frequency components of neural activity
As detailed above, our hypothesis on a potential role for intrinsic neuronal resonance in stabilizing grid-patterned firing in heterogeneous CAN models was centered on the ability of resonators to specifically suppress low-frequency components. Although our analyses provided evidence for stabilization of grid-patterned firing with phenomenological (Figure 8) or mechanistic (Figure 11) resonators, did resonance specifically suppress the impact of biological heterogeneities on low-frequency components of neural activity? To assess this, we performed frequency-dependent analyses of neural activity in CAN models with phenomenological or mechanistic resonators (Figure 13, Figure 13—figure supplements 1–3) and compared them with integrator-based CAN models (Figure 4). First, we found that the variance in the deviations of neural activity in heterogeneous networks, with reference to the respective homogeneous models, was considerably lower with resonator neurons (both phenomenological and mechanistic models) compared to their integrator counterparts (Figure 13A,B, Figure 13D,E, Figure 13—figure supplements 1–3). Second, comparing the relative power of neural activity across different octaves, we found that networks with resonators suppressed lower frequencies (predominantly the 0–2 Hz band) and enhanced power in the range of neuronal resonance frequencies, when compared with their integrator counterparts (Figure 13C, Figure 13F, Figure 13—figure supplements 1–3). This relative suppression of low-frequency power, accompanied by a relative enhancement of high-frequency power, was observed across all resonator networks, whether homogeneous (Figure 13—figure supplements 1–2) or endowed with distinct forms of heterogeneities (Figure 13—figure supplement 3G–I).
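The octave-wise power comparison can be sketched as follows; the toy signal, the first-order high-pass stage standing in for the resonator's low-frequency suppression, and all parameter values are illustrative assumptions rather than the study's analysis pipeline:

```python
import numpy as np

def octave_power_fractions(x, fs, bands=((0, 2), (2, 4), (4, 8), (8, 16))):
    """Fraction of power (up to the top band edge) in each octave band."""
    psd = np.abs(np.fft.rfft(x)) ** 2
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    total = psd[freqs <= bands[-1][1]].sum()
    return [psd[(freqs > lo) & (freqs <= hi)].sum() / total for lo, hi in bands]

def highpass(x, fs, tau=0.04):
    """x minus a leaky (exponential) running average: a crude first-order
    high-pass, standing in for the resonator's low-frequency suppression."""
    out = np.empty_like(x)
    avg = x[0]
    a = (1.0 / fs) / tau
    for i, v in enumerate(x):
        avg += a * (v - avg)
        out[i] = v - avg
    return out

fs = 100.0
t = np.arange(0, 20, 1 / fs)
# toy "heterogeneity-perturbed" activity: dominant 1 Hz drift + weaker 8 Hz component
x = 2.0 * np.sin(2 * np.pi * 1 * t) + 0.5 * np.sin(2 * np.pi * 8 * t)
p_raw = octave_power_fractions(x, fs)
p_hp = octave_power_fractions(highpass(x, fs), fs)
```

In this sketch, high-pass filtering shifts relative power out of the 0–2 Hz octave and into the higher octaves, the same signature of low-frequency suppression that the octave-wise comparison quantifies.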
Importantly, given the slow activity-dependent negative feedback loop involved in mechanistic resonators, the low-frequency suppression was found to be extremely effective across all degrees of heterogeneities (Figure 13D,E) with minimal increases of power in high-frequency bands (Figure 13F) compared to their phenomenological counterparts (Figure 13A–C). The phenomenological resonators were built with a simple HPF that is not endowed with activity-dependent filtering capabilities. In addition, the derivative-based implementation of the phenomenological resonator model yielded spurious high-frequency power, which was averted with the slow activity-dependent negative feedback loop incorporated into the mechanistic resonator model (Figure 13D–F).
Together, our results demonstrated that biological heterogeneities predominantly altered low-frequency components of neural activity, and provided strong quantitative lines of evidence that intrinsic neuronal resonance plays a stabilizing role in heterogeneous networks by targeted suppression of low-frequency inputs, thereby counteracting the disruptive impact of biological heterogeneities on low-frequency components.
Discussion
The principal conclusion of this study is that intrinsic neuronal resonance stabilizes heterogeneous 2D CAN models, by suppressing heterogeneity-induced perturbations in low-frequency components of neural activity. Our analyses provided the following lines of evidence in support of this conclusion:
1. Neural-circuit heterogeneities destabilized grid-patterned activity generation in a 2D CAN model (Figures 2 and 3).
2. Neural-circuit heterogeneities predominantly introduced perturbations in the low-frequency components of neural activity (Figure 4).
3. Targeted suppression of low-frequency components through phenomenological (Figure 5C) or through mechanistic (Figure 9D) resonators resulted in stabilization of the heterogeneous CAN models (Figures 8 and 11). Thus, stabilization was achieved irrespective of the means employed to suppress low-frequency components: an activity-independent suppression of low frequencies (Figure 5) or an activity-dependent slow negative feedback loop (Figure 9).
4. Changing the feedback time constant in mechanistic resonators, without changes to the neural gain or the feedback strength, allowed us to control the specific range of frequencies that would be suppressed. Our analyses showed that a slow negative feedback loop, which results in targeted suppression of low-frequency components, was essential in stabilizing grid-patterned activity (Figure 12). As the slow negative feedback loop and the resultant suppression of low frequencies mediate intrinsic neuronal resonance, these analyses provide important lines of evidence for the role of targeted suppression of low frequencies in stabilizing grid-patterned activity.
5. We demonstrated that the incorporation of phenomenological (Figure 13A–C) or mechanistic (Figure 13D–F) resonators specifically suppressed lower frequencies of activity in the 2D CAN model.
A physiological role for intrinsic neuronal resonance in stabilizing heterogeneous neural networks
Intrinsic neuronal resonance is effectuated by the expression of resonating conductances, which are mediated by restorative channels endowed with activation time constants slower than the neuronal membrane time constant (Hutcheon and Yarom, 2000). The gating properties and the kinetics of these ion channels allow them to suppress low-frequency activity with little to no impact on higher frequencies, yielding resonance in conjunction with the LPF governed by the membrane time constant (Hutcheon and Yarom, 2000). This intrinsic form of neuronal resonance, mediated by ion channels that are intrinsic to the neural structure, is dependent on membrane voltage (Narayanan and Johnston, 2007; Narayanan and Johnston, 2008; Hu et al., 2009), somato-dendritic recording location (Narayanan and Johnston, 2007; Narayanan and Johnston, 2008; Hu et al., 2009), neuronal location along the dorso-ventral axis (Giocomo et al., 2007; Garden et al., 2008) and is regulated by activity-dependent plasticity (Brager and Johnston, 2007; Narayanan and Johnston, 2007). 
Resonating conductances have been implicated in providing frequency selectivity to specific range of frequencies under sub- and supra-threshold conditions (Hutcheon and Yarom, 2000; Narayanan and Johnston, 2007; Hu et al., 2009; Das and Narayanan, 2017; Das et al., 2017), in mediating membrane potential oscillations (Fransén et al., 2004; Mittal and Narayanan, 2018), in mediating coincidence detection through alteration to the class of neural excitability (Das and Narayanan, 2017; Das et al., 2017), in regulating spatio-temporal summation of synaptic inputs (Garden et al., 2008), in introducing phase leads in specific ranges of frequencies regulating temporal relationships of neural responses to oscillatory inputs (Narayanan and Johnston, 2008), in regulating local-field potentials and associated spike phases (Sinha and Narayanan, 2015; Ness et al., 2018), in mediating activity homeostasis through regulation of neural excitability (Brager and Johnston, 2007; Narayanan and Johnston, 2007; Honnuraiah and Narayanan, 2013), in altering synaptic plasticity profiles (Nolan et al., 2007; Narayanan and Johnston, 2010), and in regulating grid-cell scales (Giocomo et al., 2007; Giocomo et al., 2011b; Giocomo et al., 2011a; Pastoll et al., 2012).
Our analyses provide an important additional role for intrinsic resonance in stabilizing heterogeneous neural networks, through the suppression of heterogeneity-induced perturbations in low-frequency components. The 1/f characteristics associated with neural activity imply that the power in lower frequencies is higher (Buzsaki, 2006), and our analyses show that the incorporation of biological heterogeneities into networks disrupts their functional outcomes by introducing perturbations predominantly in the lower frequencies. We demonstrate that resonating conductances, through their ability to suppress low-frequency activity, are ideally placed to suppress such low-frequency perturbations, thereby stabilizing activity patterns in the face of biological heterogeneities. As biological heterogeneities are ubiquitous, we postulate intrinsic neuronal resonance as a powerful mechanism that lends stability across different heterogeneous networks, through suppression of low-frequency perturbations introduced by the heterogeneities. A corollary to this postulate is that the specific resonance frequency of a given neural structure is a reflection of the need to suppress frequencies where perturbations are introduced by the specific forms and degrees of biological heterogeneities expressed in the network where the structure resides.
Within this framework, the dorso-ventral gradient in resonance frequencies in entorhinal stellate cells could be a reflection of the impact of specific forms of heterogeneities expressed along the dorso-ventral axis on specific frequency ranges, with resonance frequency being higher in regions where a larger suppression of low-frequency components is essential. Future electrophysiological and computational studies could specifically test this hypothesis by quantifying the different heterogeneities in these sub-regions, assessing their frequency-dependent impact on neural activity in individual neurons and their networks. More broadly, future studies could directly measure the impact of perturbing resonating conductances on network stability and low-frequency activity in different brain regions to test our predictions on the relationship among biological heterogeneities, intrinsic resonance, and network stability.
Slow negative feedback: stability, noise suppression, and robustness
Our analyses demonstrated the efficacy of a slow negative feedback loop in stabilizing grid-patterned activity in CAN models (Figures 11–12). From a broader perspective, negative feedback is a standard motif for establishing stabilization and robustness of dynamical systems, spanning control engineering (Nyquist, 1932; Black, 1934; Bode, 1945; Bechhoefer, 2005; Åström and Murray, 2008) and biological networks (Barkai and Leibler, 1997; Weng et al., 1999; Hutcheon and Yarom, 2000; Bhalla et al., 2002; Milo et al., 2002; Shen-Orr et al., 2002; Tyson et al., 2003; Alon, 2007; Turrigiano, 2007; Novák and Tyson, 2008; Tyson and Novák, 2010). In biological systems, the impact of negative feedback on stability, robustness, and homeostasis spans multiple scales, from single-molecule dynamics to the effective functioning of entire ecosystems. Examples of the stabilizing roles of negative feedback at the organ level include the baroreflex in blood pressure regulation (Sved, 2009), and the regulation of body temperature, blood glucose levels, endocrine hormone secretion (Modell et al., 2015), and erythropoiesis (Koulnis et al., 2014). Cells are equipped with distinct receptors that can sense changes in temperature, pH, damage to cellular workhorse proteins or DNA, the internal state of cells, and the accumulation of products (Tyson and Novák, 2010). Functional motifs consisting of combinations of positive and negative feedback loops often impart stability to the biochemical reactions and signaling networks comprising these receptors, thus maintaining homeostasis (Alon, 2007).
Specific examples for negative feedback in biochemical reactions and signaling networks include protein synthesis (Goodwin, 1966), mitotic oscillators (Goldbeter, 1991), MAPK signaling pathways (Kholodenko, 2000; Bhalla et al., 2002), cAMP production (Martiel and Goldbeter, 1987), adaptation in bacterial chemotaxis (Knox et al., 1986; Barkai and Leibler, 1997; Spiro et al., 1997; Yi et al., 2000; Levchenko and Iglesias, 2002), and M/G1 module (G1/S and G2/M phases) of the cell cycle control system (Rupes, 2002).
Importantly, with specific relevance to our hypothesis, negative feedback has been shown to reduce the effects of noise and enhance system stability with reference to internal and external perturbations because of the suppressive nature of this motif (Savageau, 1974; Becskei and Serrano, 2000; Thattai and van Oudenaarden, 2001; Austin et al., 2006; Dublanche et al., 2006; Raj and van Oudenaarden, 2008; Lestas et al., 2010; Cheong et al., 2011; Voliotis et al., 2014). In addition, negative feedback alleviates bottlenecks on information transfer (Cheong et al., 2011), and has been proposed as a mechanism for stable alignment of dose-response in the signaling system for mating pheromones in yeast, by adjusting the dose-response of downstream systems to match the dose-response of upper-level systems and simultaneously reducing the amplification of stochastic noise in the system (Yu et al., 2008).
At the cellular scale, the dynamics of the action potential, a fundamental unit of information transfer in the nervous system, is critically dependent on the negative feedback mediated by delayed-rectifier potassium channels (Hodgkin and Huxley, 1952c; Hodgkin and Huxley, 1952b; Hodgkin and Huxley, 1952a). Resonating conductances (or phenomenological inductances) mediate slow negative feedback in sustaining sub-threshold membrane potential oscillations, along with amplifying conductances that mediate a fast positive feedback loop (Cole and Baker, 1941; Mauro, 1961; Cole, 1968; Mauro et al., 1970; Hutcheon and Yarom, 2000; Narayanan and Johnston, 2008). A similar combination of positive and negative feedback loops involving synaptic connectivity has been suggested to modulate the neuronal oscillation frequency during sensory processing in a neuronal circuit model of layers 2 and 3 of sensory cortex (Lee et al., 2018). Furthermore, positive and negative feedback signals play critical roles in the emergence of neuronal polarity (Takano et al., 2019). Activity-dependent negative feedback mechanisms control the density of ion channels and receptors based on the levels of neural activity, resulting in homeostatic activity regulation (Bienenstock et al., 1982; Turrigiano et al., 1994; Turrigiano et al., 1995; Turrigiano et al., 1998; Desai et al., 1999; Turrigiano, 2007; O'Leary et al., 2010; O'Leary and Wyllie, 2011; Honnuraiah and Narayanan, 2013; O'Leary et al., 2014; Srikanth and Narayanan, 2015). Microglia have been shown to stabilize neuronal activity through a negative feedback loop dependent on extracellular ATP concentration, which in turn depends on neural activity (Badimon et al., 2020).
Models of neuronal networks have successfully utilized negative feedback loops for achieving the transfer of decorrelated olfactory inputs in the paleocortex (Ambros-Ingerson et al., 1990), for the reduction of redundancy in information transfer in the visual pathway (Pece, 1992), in a model of the LGN where feedback inhibition from V1 achieves response specificity (Murphy and Sillito, 1987), and in a model of the cerebellum based on feedback motifs (D'Angelo et al., 2016).
Finally, an important distinction between the phenomenological and the mechanistic resonators is that the former is activity independent, whereas the latter is activity dependent in terms of their ability to suppress low-frequency signals. This distinction explains the differences between how these two resonators act on homogeneous and heterogeneous CAN networks, especially in terms of suppressing low-frequency power (Figure 13, Figure 13—figure supplements 1–3). Together, the incorporation of resonance through a negative feedback loop allowed us to link our analyses to the well-established role of network motifs involving negative feedback loops in inducing stability and suppressing external/internal noise in engineering and biological systems. We envisage intrinsic neuronal resonance as a cellular-scale activity-dependent negative feedback mechanism, a specific instance of a well-established network motif that effectuates stability and suppresses perturbations across different networks.
Future directions and considerations in model interpretation
Our analyses here employed a rate-based CAN model for the generation of grid-patterned activity. Rate-based models are inherently limited in their ability to assess temporal relationships between spike timings, which are important from the temporal coding perspective where spike timings with reference to extracellular oscillations have been shown to carry spatial information within grid fields (Hafting et al., 2008). Additionally, the CAN model is one of several theoretical and computational frameworks that have been proposed for the emergence of grid-cell activity patterns, and there are lines of experimental evidence that support aspects of these distinct models (Kropff and Treves, 2008; Burak and Fiete, 2009; Burgess and O'Keefe, 2011; Giocomo et al., 2011a; Navratilova et al., 2012; Couey et al., 2013; Domnisoru et al., 2013; Schmidt-Hieber and Häusser, 2013; Yoon et al., 2013; Schmidt-Hieber et al., 2017; Urdapilleta et al., 2017; Stella et al., 2020; Tukker et al., 2021). Within some of these frameworks, resonating conductances have been postulated to play specific roles, distinct from the one proposed in our study, in the emergence of grid-patterned activity and the regulation of their properties (Giocomo et al., 2007; Giocomo et al., 2011b; Giocomo et al., 2011a; Pastoll et al., 2012). Together, the use of the rate-based CAN model has limitations in terms of assessing temporal relationships between oscillations and spike timings, and in deciphering the other potential roles of resonating conductances in grid-patterned firing. However, models that employ other theoretical frameworks do not explicitly incorporate the many heterogeneities in afferent, intrinsic, and synaptic properties of biological networks, including those in the conductance and gating properties of resonating conductances.
Future studies should therefore explore the role of resonating conductances in stabilizing conductance-based grid-cell networks that are endowed with all forms of biological heterogeneities. Such conductance-based analyses should also systematically assess the impact of resonating conductances, their kinetics, and gating properties (including associated heterogeneities) in regulating temporal relationships of spike timings with theta-frequency oscillations spanning the different theoretical frameworks.
A further direction for future studies could be the use of morphologically realistic conductance-based model neurons, which would enable the incorporation of the distinct ion channels and receptors distributed across the somato-dendritic arborization. Such models could assess the role of interactions among several somato-dendritic conductances, especially with resonating conductances, in regulating grid-patterned activity generation (Burgess and O'Keefe, 2011; Giocomo et al., 2011a). In addition, computations performed by such a morphologically realistic conductance-based neuron are more complex than the simplification of neural computation as an integrator or a resonator (Schmidt-Hieber et al., 2017). For instance, owing to the differential distribution of ionic conductances, different parts of the neurons could exhibit integrator- or resonator-like characteristics, with interactions among different compartments yielding the final outcome (Das and Narayanan, 2017; Das et al., 2017). The conclusions of our study emphasizing the importance of biological heterogeneities and resonating conductances in grid-cell models underline the need for heterogeneous, morphologically realistic conductance-based network models to systematically compare different theoretical frameworks for grid-cell emergence. Future studies should endeavor to build such complex heterogeneous networks, endowed with synaptic and channel noise, in systematically assessing the role of heterogeneities and specific ion-channel conductances in the emergence of grid-patterned neural activity across different theoretical frameworks.
In summary, our analyses demonstrated that incorporation of different forms of biological heterogeneities disrupted network functions through perturbations that were predominantly in the lower frequency components. We showed that intrinsic neuronal resonance, a mechanism that suppressed low-frequency activity, stabilized network function. As biological heterogeneities are ubiquitous and as the dominance of low-frequency perturbations is pervasive across biological networks (Hausdorff and Peng, 1996; Gilden, 2001; Gisiger, 2001; Ward, 2001; Buzsaki, 2006), we postulate that mechanisms that suppress low-frequency components could be a generalized route to stabilize heterogeneous biological networks.
Materials and methods
Development of a virtual trajectory to reduce computational cost
We developed the following algorithm to yield faster virtual trajectories in either a circular (2 m diameter) or a square (2 m × 2 m) arena:
1. The starting location of the virtual animal was set at the center of the arena (x = 1 m, y = 1 m).
2. At each time step (dt = 1 ms), two random numbers were picked: one denoting the distance moved (r), drawn from a uniform distribution, and another yielding the angle of movement (θ), drawn from a second uniform distribution. The angle of movement was restricted to a narrow range on either side of the current heading to yield smooth movements within the spatial arena. The new coordinates of the animal were then updated as:

x(t + dt) = x(t) + r cos(θ)  (1)

y(t + dt) = y(t) + r sin(θ)  (2)
3. If the new location (x, y) fell outside the arena, r and θ were repicked until (x, y) was inside the bounds of the arena.
4. To enable sharp turns near the boundaries of the arena, the angle of movement θ was picked from a wider uniform distribution if either coordinate was close to the arena boundaries. This enhanced angular range close to the boundaries ensured that there was enough deflection in the trajectory to mimic sharp turns in animal runs at open arena boundaries.
The limited angular range in step 2 ensured that the head direction and velocity inputs to the neurons in the CAN model were not changing drastically at every time step of the simulation run, thereby stabilizing spatial activity. We found the virtual trajectories yielded by this algorithm to closely mimic animal runs in an open arena, with better control over simulation periods and better computational efficiency in terms of covering the entire arena within a shorter duration.
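The steps above can be sketched as follows in NumPy (the study's own code was in MATLAB). The step-size bound, angular ranges, and boundary margin below are illustrative assumptions, not the paper's values:

```python
import numpy as np

def virtual_trajectory(duration_ms, arena=2.0, dt=1,
                       max_step=0.001, max_turn=np.pi / 36,
                       boundary_turn=np.pi / 2, margin=0.05, seed=0):
    """Sketch of the virtual-trajectory algorithm. max_step, max_turn,
    boundary_turn, and margin are illustrative assumptions."""
    rng = np.random.default_rng(seed)
    x, y = arena / 2, arena / 2              # step 1: start at the center
    heading = rng.uniform(0, 2 * np.pi)
    xs, ys = [x], [y]
    for _ in range(int(duration_ms / dt)):
        near_wall = min(x, y, arena - x, arena - y) < margin
        turn = boundary_turn if near_wall else max_turn  # step 4: wider turns near walls
        while True:                          # step 3: repick until inside the arena
            r = rng.uniform(0, max_step)     # step 2: distance and angle
            theta = heading + rng.uniform(-turn, turn)
            xn, yn = x + r * np.cos(theta), y + r * np.sin(theta)
            if 0 <= xn <= arena and 0 <= yn <= arena:
                break
        x, y, heading = xn, yn, theta
        xs.append(x)
        ys.append(y)
    return np.array(xs), np.array(ys)
```

The narrow angular range keeps successive headings correlated, which is what stabilizes the head-direction and velocity inputs described above.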
Structure and dynamics of the continuous attractor network model with integrator neurons
Each neuron in the network has a preferred direction (assumed to receive input from specific head direction cells), which took one of the values 0, π/2, π, or 3π/2, respectively depicting east, north, west, and south. The network was divided into local blocks of four cells representing each of these four directions, and local blocks were uniformly tiled to span the entire network. This organization translated to a scenario where one-fourth of the neurons in the network were endowed with inputs that had the same direction preference. Of the two sets of synaptic inputs to neurons in the network, intra-network inputs followed a Mexican-hat connectivity pattern. The recurrent weights matrix for Mexican-hat connectivity was achieved through the difference of Gaussians equation, given by Burak and Fiete, 2009:
W_ij = W_0(x_i − x_j − l ê_θj)  (3)

W_0(x) = a exp(−γ|x|²) − exp(−β|x|²)  (4)

where W_ij represented the synaptic weight from neuron j (located at x_j) to neuron i (located at x_i), and ê_θj defined the unit vector pointing along the θ_j direction. This weight matrix was endowed with a center-shifted center-surround connectivity, and the parameter l (default value: 2) defined the amount of shift along ê_θj. In the difference of Gaussians formulation (Equation 4), the parameter a regulated the sign of the synaptic weights and was set to 1, defining an all-inhibitory network. The other parameters were γ = 1.1 × β with β = 3/λ², and λ (default value: 13) defining the periodicity of the lattice (Burak and Fiete, 2009).
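This weight rule can be sketched as follows (NumPy; the original code was MATLAB). The 2×2 block tiling of preferred directions and the aperiodic (non-wrapped) distance computation are simplifying assumptions of this sketch:

```python
import numpy as np

def dog_weights(n=16, lam=13.0, a=1.0, l=2.0):
    """Center-shifted difference-of-Gaussians recurrent weights for an
    n x n sheet of neurons (Burak & Fiete 2009 form); illustrative sketch."""
    beta = 3.0 / lam ** 2
    gamma = 1.1 * beta
    # preferred directions tiled in local 2x2 blocks: E, N, W, S
    dirs = np.array([0.0, np.pi / 2, np.pi, 3 * np.pi / 2])
    ii, jj = np.indices((n, n))
    theta = dirs[2 * (ii % 2) + (jj % 2)].ravel()
    pos = np.column_stack([ii.ravel(), jj.ravel()]).astype(float)
    e_theta = np.column_stack([np.cos(theta), np.sin(theta)])
    # displacement from presynaptic neuron j (shifted by l along its
    # preferred direction) to postsynaptic neuron i
    d = pos[:, None, :] - pos[None, :, :] - l * e_theta[None, :, :]
    dist2 = (d ** 2).sum(axis=-1)
    return a * np.exp(-gamma * dist2) - np.exp(-beta * dist2)
```

With a = 1 and γ > β, every weight is non-positive, which is the all-inhibitory regime described above.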
The second set of synaptic inputs to individual neurons, arriving as feed-forward inputs based on the velocity of the animal and the preferred direction of the neuron, was computed as:

B_i(t) = 1 + α (ê_θi · v(t))  (5)

where α denotes a gain parameter for the velocity inputs (velocity scaling factor) and v(t) represents the velocity vector derived from the trajectory of the virtual animal.
The dynamics of the rate-based integrator neurons, driven by these two sets of inputs, was then computed as:

τ dS_i(t)/dt + S_i(t) = f( Σ_j W_ij S_j(t) + B_i(t) )  (6)

where f represented the neural transfer function, which was implemented as a simple rectification nonlinearity, and S_i(t) denoted the activity of neuron i at time point t. The default value of the integration time constant (τ) of the neural response was 10 ms. CAN models were initialized with randomized values of activity (S_i) for all neurons. For a stable spontaneous pattern to emerge over the neural lattice, an initial constant feed-forward drive was provided by ensuring that the velocity input was zero for the initial 100 ms period. The time step (dt) for numerical integration was 0.5 ms when we employed the real trajectory and 1 ms for simulations with virtual trajectories. We confirmed that the use of 1 ms time steps for virtual trajectories did not hamper the accuracy of the outcomes. Activity patterns of all neurons were recorded for each time step. For visualization of the results, the spike threshold was set at 0.1 (a.u.).
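A single forward-Euler update of these rate dynamics can be sketched as below (NumPy; the original code was MATLAB, and the function name is illustrative):

```python
import numpy as np

def can_step(S, W, B, tau=10.0, dt=1.0):
    """One forward-Euler update of rate-based CAN dynamics:
    tau dS/dt = -S + f(W @ S + B), with f a rectification nonlinearity."""
    drive = np.maximum(W @ S + B, 0.0)   # rectification transfer function f
    return S + (dt / tau) * (drive - S)
```

With zero velocity input, B reduces to a constant feed-forward drive, so iterating this update from a random initial state lets a spontaneous pattern settle, mirroring the 100 ms initialization described above.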
Incorporating biological heterogeneities into the CAN model
We introduced intrinsic, afferent, and synaptic forms of biological heterogeneities by independently randomizing the values of the integration time constant (τ), the velocity scaling factor (α), and the connectivity matrix (W), respectively, across neurons in the CAN model. Specifically, in the homogeneous CAN model, these parameters were set to a default value and were identical across all neurons in the network. However, in heterogeneous networks, each neuron in the network was assigned a different value for these parameters, each picked from respective uniform distributions (Table 1). We progressively expanded the ranges of the respective uniform distributions to progressively increase the degree of heterogeneity (Table 1). We built CAN models with four different forms of heterogeneities: networks that were endowed with one of the intrinsic, afferent, and synaptic forms of heterogeneities, and networks that expressed all forms together. In networks that expressed only one form of heterogeneity, the other two parameters were set identical across all neurons. In networks expressing all forms of heterogeneities, all three parameters were randomized, with the span of the uniform distribution for each parameter concurrently increasing with the degree of heterogeneity (Table 1). We simulated different trials of a CAN model by employing different sets of initial randomization of activity values for all the cells, while keeping all other model parameters (including the connectivity matrix, trajectory, and the specific instance of heterogeneities) unchanged (Figure 3—figure supplement 1).
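A minimal sketch of this parameter randomization is given below; the 20%-per-degree half-span is an illustrative assumption (the paper's actual spans are in its Table 1):

```python
import numpy as np

def sample_heterogeneous(base, degree, n, floor=None, seed=0):
    """Draw one parameter value per neuron from a uniform distribution
    centered on `base`, whose half-span widens with the degree of
    heterogeneity. The 20%-per-degree span is an illustrative assumption."""
    rng = np.random.default_rng(seed)
    half_span = 0.2 * degree * base          # degree 1 -> ±20%, degree 5 -> ±100%
    vals = rng.uniform(base - half_span, base + half_span, n)
    if floor is not None:
        vals = np.maximum(vals, floor)       # absolute cutoff (e.g., tau >= 1 ms)
    return vals
```

For example, `sample_heterogeneous(10.0, 5, n, floor=1.0)` mimics the fifth-degree intrinsic heterogeneity with its 1 ms cutoff on τ.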
Introducing resonance in rate-based neurons: phenomenological model
To assess the frequency-dependent response properties, we employed a chirp stimulus I_chirp(t), defined as a constant-amplitude sinusoidal input with its frequency linearly increasing as a function of time, spanning 0–100 Hz in 100 s. We fed I_chirp(t) as the input to a rate-based model neuron and recorded the response s_B(t) of the neuron:

τ ds_B(t)/dt + s_B(t) = I_chirp(t)  (7)
We computed the Fourier transform of the response and employed the magnitude of this transform to evaluate the frequency-dependent response properties of the neuron. Expectedly, the integrator neuron acted as a low-pass filter (LPF; Figure 5A,B).
A simple means to elicit resonance from the response of a low-pass system is to feed the output of the low-pass system to a high-pass filter (HPF); the interaction between these filters results in resonance (Hutcheon and Yarom, 2000; Narayanan and Johnston, 2008). We tested this by employing the chirp stimulus, using the low-pass response s_B(t) from Equation 7:

s_R(t) = s_B(t) (ds_B(t)/dt)^ε  (8)

Here, s_R(t) represented the output of the resultant resonator neuron, and ε defined an exponent that regulates the slope of the frequency-selective response properties of the high-pass filter. When ε = 0, s_R trivially falls back to the low-pass response s_B. The magnitude of the Fourier transform of s_R(t) manifested resonance in response to the chirp stimulus (Figure 5A,C). This model for achieving resonance in single neurons was referred to as a phenomenological resonator.
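The LPF-to-HPF cascade can be demonstrated numerically as below (NumPy sketch; the ε value is illustrative, and the rectifying absolute value on the derivative is an assumption of this sketch to keep the fractional power real):

```python
import numpy as np

# Chirp stimulus: constant amplitude, frequency rising linearly 0-100 Hz over
# 100 s, so the instantaneous frequency near time t (in s) is about t Hz.
dt, tau, eps = 5e-4, 10e-3, 0.3          # seconds; eps (HPF exponent) illustrative
t = np.arange(0, 100.0, dt)
I_chirp = np.sin(2 * np.pi * (100.0 / (2 * 100.0)) * t ** 2)

# Low-pass (integrator) response: tau dsB/dt + sB = I_chirp (forward Euler)
sB = np.zeros_like(t)
for n in range(1, len(t)):
    sB[n] = sB[n - 1] + (dt / tau) * (I_chirp[n - 1] - sB[n - 1])

# High-pass stage: multiply by a power of the (rectified) derivative
sR = sB * np.abs(np.gradient(sB, dt)) ** eps

# Per-1-Hz-band response envelopes: window w spans roughly [w, w+1] Hz
envB = np.abs(sB).reshape(100, -1).max(axis=1)
envR = np.abs(sR).reshape(100, -1).max(axis=1)
```

The envelope of sB is largest in the lowest band (low-pass behavior), whereas the envelope of sR peaks at an intermediate band, i.e., band-pass resonance.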
Having confirmed that the incorporation of an HPF would yield resonating dynamics, we employed this formulation to define the dynamics of a resonator neuron in the CAN model through the combination of the existing low-pass kinetics (Equation 6) and the high-pass kinetics. Specifically, we obtained S_i(t) for each neuron in the CAN model from Equation 6 and redefined neural activity as the product of this S_i(t) and its derivative raised to an exponent:

S_i^R(t) = R S_i(t) (dS_i(t)/dt)^ε  (9)

where R was a scaling factor for matching the response of resonator neurons with integrator neurons, and ε defined the exponent of the high-pass filter. Together, whereas the frequency-dependent response of the integrator is controlled by the integration time constant (τ), that of a resonator is dependent on the τ of the integrator as well as the HPF exponent ε (Figure 5D,E).
To simulate homogeneous CAN models with resonator neurons, all integrator neurons in the standard homogeneous model (Equations 3–6) were replaced with resonator neurons. Intrinsic, synaptic, and afferent heterogeneities were introduced as previously described (Table 1) to build heterogeneous CAN models with resonator neurons. The other processes, including the initialization procedure of the resonator neuron network, were identical to those of the integrator neuron network. In simulations where τ was changed from its base value of 10 ms, the spans of the uniform distributions that defined the five degrees of heterogeneities were appropriately rescaled and centered at the new value of τ. For instance, when τ was set to 8 ms, the spans of the uniform distribution for the first and the fifth degrees of heterogeneity were 6.4–9.6 ms (20% of the base value on either side) and 1–16 ms (100% of the base value on either side, with an absolute cutoff at 1 ms), respectively.
Mechanistic model for introducing intrinsic resonance in rate-based neurons
Neuronal intrinsic resonance was achieved by incorporating a slow negative feedback into the single-neuronal dynamics of rate-based neurons (Figure 9A). We tested the emergence of intrinsic resonance using the chirp stimulus described earlier (Equation 7; Figure 9D–E). The dynamics of the mechanistic model of resonance, with the chirp stimulus as input, were as follows:

τ ds(t)/dt + s(t) = I_chirp(t) − g m(t)  (10)

τ_m dm(t)/dt = m_∞(s) − m(t)  (11)
Here, s governed neuronal activity, m defined the feedback state variable, and g (default value 0.015) represented the feedback strength. The slow kinetics of the negative feedback were controlled by the feedback time constant τ_m, with a default value of 75 ms. In order to manifest resonance, τ_m must be much larger than τ. The steady-state feedback kernel m_∞ of the negative feedback was sigmoidally dependent on the output of the neuron (s), with default values of the half-maximal activity (s_1/2) of 0.3 and the slope (k) of 0.1:

m_∞(s) = 1 / (1 + exp(−(s − s_1/2)/k))  (12)
The magnitude of the Fourier transform of s(t) in this system of differential equations was assessed for the expression of resonance in response to the chirp stimulus (Figure 9D). This model for achieving resonance in single neurons was referred to as a mechanistic resonator.
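A minimal NumPy sketch of a rate neuron with a slow negative feedback is given below, using the default parameter values quoted in the text (g = 0.015, τ_m = 75 ms, s_1/2 = 0.3, k = 0.1); the exact coupling form (feedback term subtracted from the input) is an assumption of this sketch:

```python
import numpy as np

def m_inf(s, s_half=0.3, k=0.1):
    """Sigmoidal steady-state kernel of the slow negative feedback."""
    return 1.0 / (1.0 + np.exp(-(s - s_half) / k))

def simulate(I_of_t, t_max, dt=1e-4, tau=10e-3, tau_m=75e-3, g=0.015):
    """Forward-Euler integration of the resonator sketch:
    tau ds/dt = -s + I(t) - g*m,  tau_m dm/dt = m_inf(s) - m."""
    n = int(t_max / dt)
    s = np.zeros(n)
    m = np.zeros(n)
    m[0] = m_inf(0.0)
    for i in range(1, n):
        t = (i - 1) * dt
        s[i] = s[i - 1] + (dt / tau) * (-s[i - 1] + I_of_t(t) - g * m[i - 1])
        m[i] = m[i - 1] + (dt / tau_m) * (m_inf(s[i - 1]) - m[i - 1])
    return s, m
```

Driving this model with a baseline plus a small sinusoid and comparing response amplitudes shows that the slow feedback attenuates low frequencies more than intermediate ones, which is the band-pass signature of resonance.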
These resonating neurons were incorporated within the CAN framework to assess how neuronal intrinsic resonance achieved through mechanistic means affected grid-cell firing. The synaptic weight matrix (Equation 4) as well as the velocity dependence (Equation 5) associated with the CAN model consisting of resonator neurons were identical to those of the CAN model with integrator neurons, with the only difference in the single-neuronal dynamics:

τ dS_i(t)/dt + S_i(t) = f( Σ_j W_ij S_j(t) + B_i(t) ) − g m_i(t)  (13)
with m_i evolving as per Equation (11); the term g m_i implemented the activity-dependent slow negative feedback, which is dependent on the current state of the neuron.
Quantitative analysis of grid-cell activity
To quantitatively assess the impact of heterogeneities and the introduction of resonance into the CAN neurons, we employed standard measurements of grid-cell activity (Fyhn et al., 2004; Hafting et al., 2005).
We divided the space of the open arena into 100 × 100 pixels to compute the rate maps of grid-cell activity in the network. Activity spatial maps were constructed for each cell by taking the activity of the cell (S_i for cell i) at each time stamp and assigning it to the pixel corresponding to the location of the virtual animal in the open arena, for the entire duration of the simulation. Occupancy spatial maps were constructed by computing the probability (p_i) of finding the rat in the i-th pixel, employing the relative time spent by the animal in that pixel across the total simulation period. Spatial rate maps for each cell in the network were computed by normalizing their activity spatial maps by the occupancy spatial map for that run. Spatial rate maps were smoothened using a 2D Gaussian kernel with a standard deviation (σ) of 2 pixels (e.g., Figure 1A, panels in the bottom row). Average firing rate (μ) of grid cells was computed by summing the activity of all the pixels from the rate map matrix and dividing this quantity by the total number of pixels in the rate map (N = 10,000). Peak firing rate of a neuron was defined as the highest activity value observed across all pixels of its spatial rate map.
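The occupancy-normalized rate-map computation can be sketched as below (NumPy; Gaussian smoothing is omitted here, and the function name is illustrative):

```python
import numpy as np

def spatial_rate_map(activity, x, y, arena=2.0, n_pix=100):
    """Occupancy-normalized spatial rate map: per-pixel summed activity
    divided by per-pixel visit counts (i.e., mean activity per time stamp
    spent in that pixel), plus the occupancy probability map."""
    xi = np.clip((x / arena * n_pix).astype(int), 0, n_pix - 1)
    yi = np.clip((y / arena * n_pix).astype(int), 0, n_pix - 1)
    occ = np.zeros((n_pix, n_pix))
    act = np.zeros((n_pix, n_pix))
    np.add.at(occ, (yi, xi), 1.0)            # time stamps spent in each pixel
    np.add.at(act, (yi, xi), activity)       # summed activity in each pixel
    occ_p = occ / occ.sum()                  # occupancy probability p_i
    with np.errstate(invalid="ignore", divide="ignore"):
        rate = np.where(occ > 0, act / occ, 0.0)
    return rate, occ_p
```

`np.add.at` performs unbuffered accumulation, so pixels visited repeatedly within the same call are counted correctly.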
For estimation of grid fields, we first detected all local maxima in the 2D-smoothened version of the spatial rate maps of all neurons in the CAN model. Individual grid-firing fields were identified as contiguous regions around the local maxima, spatially spreading to a minimum of 20% activity relative to the activity at the location of the local maxima. The number of grid fields corresponding to each grid cell was calculated as the total number of local peaks detected in the spatial rate map of the cell. The mean size of the grid fields for a specific cell was estimated by calculating the ratio between the total number of pixels covered by all grid fields and the number of grid fields. The average grid-field spacing for individual grid cells was computed as the ratio of the sum of distances between the local peaks of all grid fields in the rate map to the total number of distance values.
Grid score was computed by assessing rotational symmetry in the spatial rate map. Specifically, for each neuron in the CAN model, the spatial autocorrelation value C_θ was computed between its spatial rate map and the same map rotated by θ degrees, for different values of θ (30, 60, 90, 120, or 150). These values were used for calculating the grid score for the given cell as:

Grid score = min(C_60, C_120) − max(C_30, C_90, C_150)
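This hexagonality measure can be sketched as follows (NumPy); the nearest-neighbour rotation and the correlation restricted to overlapping valid pixels are simplifications of this sketch:

```python
import numpy as np

def rotate_about_center(m, deg):
    """Nearest-neighbour rotation of a square matrix about its center;
    pixels mapping outside the matrix become NaN."""
    n = m.shape[0]
    c = (n - 1) / 2.0
    yy, xx = np.indices(m.shape)
    a = np.deg2rad(deg)
    # inverse mapping: for each output pixel, locate its source pixel
    xs = np.cos(a) * (xx - c) + np.sin(a) * (yy - c) + c
    ys = -np.sin(a) * (xx - c) + np.cos(a) * (yy - c) + c
    xi, yi = np.rint(xs).astype(int), np.rint(ys).astype(int)
    valid = (xi >= 0) & (xi < n) & (yi >= 0) & (yi < n)
    out = np.full(m.shape, np.nan)
    out[valid] = m[yi[valid], xi[valid]]
    return out

def grid_score(rate_map):
    """min correlation at 60/120 deg minus max at 30/90/150 deg."""
    corrs = {}
    for deg in (30, 60, 90, 120, 150):
        rot = rotate_about_center(rate_map, deg)
        mask = ~np.isnan(rot)
        corrs[deg] = np.corrcoef(rate_map[mask], rot[mask])[0, 1]
    return min(corrs[60], corrs[120]) - max(corrs[30], corrs[90], corrs[150])
```

A hexagonal (60°-symmetric) pattern yields a clearly positive score, while a square (90°-symmetric) pattern yields a negative one.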
Spatial information rate (I) in bits per second was calculated as:

I = Σ_i p_i λ_i log2(λ_i / μ)

where p_i defined the probability of finding the rat in the i-th pixel, λ_i represented the mean firing rate of the grid cell in the i-th pixel, and μ denoted the average firing rate of the grid cell. Sparsity was computed as the ratio between the square of the mean rate and the mean of the squared rate:

Sparsity = (Σ_i p_i λ_i)² / (Σ_i p_i λ_i²)
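Both measures can be computed from a rate map and its occupancy probabilities as below (NumPy sketch, in the standard Skaggs-style occupancy-weighted form, which here is an assumption about how μ is weighted):

```python
import numpy as np

def info_and_sparsity(rate_map, occupancy_p):
    """Spatial information rate (bits/s) and sparsity from a rate map and
    per-pixel occupancy probabilities (occupancy-weighted mean rate)."""
    p = occupancy_p.ravel()
    lam = rate_map.ravel()
    mu = float(np.sum(p * lam))                  # mean firing rate
    valid = (p > 0) & (lam > 0)                  # 0*log(0) terms contribute 0
    info = float(np.sum(p[valid] * lam[valid] * np.log2(lam[valid] / mu)))
    sparsity = mu ** 2 / float(np.sum(p * lam ** 2))
    return info, sparsity
```

A spatially uniform cell carries no spatial information (I = 0, sparsity = 1), while a cell firing in a single pixel has high information and low sparsity.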
Quantitative analysis of grid-cell temporal activity in the spectral domain
To understand the impact of network heterogeneities on spectral properties of grid-cell activities under the CAN framework, we used the Fourier transform of the temporal activity of all the grid cells in a network. First, we assessed the difference in the magnitude spectra of the temporal activity of the grid cells (n = 3600) in the homogeneous network compared to the corresponding grid cells in the heterogeneous networks (e.g., Figure 4A). Next, we normalized this difference in magnitude spectra for each grid cell with respect to the sum of their respective maximum magnitudes for the homogeneous and the heterogeneous networks. Quantitatively, if X_het(f) and X_hom(f) defined the neuronal activity in the spectral domain for a neuron in a heterogeneous network and for the same neuron in the corresponding homogeneous network, respectively, then the normalized difference was computed as:

D(f) = (|X_het(f)| − |X_hom(f)|) / (max_f |X_het(f)| + max_f |X_hom(f)|)
Note that this normalization was essential to account for potential differences in the maximum values of |X_het(f)| and |X_hom(f)|. Finally, we computed the variance of this normalized difference D(f) across all the cells in the networks (e.g., Figure 4B).
In addition to using these normalized differences for quantifying spectral signatures of neural activity, we performed octave analysis on the magnitude spectra of the temporal activity of the grid cells to confirm the impact of heterogeneities or resonating neurons on different frequency octaves. Specifically, we computed the percentage of area under the curve (AUC) for each octave (0–2 Hz, 2–4 Hz, 4–8 Hz, and 8–16 Hz) from the magnitude spectra (Figure 13—figure supplement 1B,C). We performed a similar octave analysis on the variance of normalized difference for networks endowed with integrator or resonator neurons (Figure 13B, Figure 13—figure supplement 3D–F).
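The octave analysis can be sketched as below (NumPy); normalizing each band by the total area under the magnitude spectrum is an assumption of this sketch:

```python
import numpy as np

def octave_auc_percent(signal, dt=1e-3,
                       octaves=((0, 2), (2, 4), (4, 8), (8, 16))):
    """Percentage of the total area under the one-sided magnitude spectrum
    that falls within each frequency octave."""
    mag = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=dt)
    total = mag.sum()
    return [100.0 * mag[(freqs >= lo) & (freqs < hi)].sum() / total
            for lo, hi in octaves]
```

A pure tone places essentially all of its spectral area in the octave containing its frequency, which makes the measure easy to sanity-check.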
Computational details
Grid-cell network simulations were performed in MATLAB 2018a (Mathworks Inc, USA) with a simulation step size of 1 ms, unless otherwise specified. All data analyses and plotting were performed using custom-written software within the IGOR Pro (Wavemetrics, USA) or MATLAB environments, and all statistical analyses were performed using the R statistical package (http://www.R-project.org/). To avoid false interpretations and to emphasize heterogeneities in simulation outcomes, the entire range of measurements is reported in figures rather than providing only the summary statistics (Rathour and Narayanan, 2019).
Data availability
All data generated or analyzed during this study are included in the manuscript and supporting files. The custom-written simulations and analyses code (in MATLAB) employed for simulations are available as source code.
References
- Network motifs: theory and experimental approaches. Nature Reviews Genetics 8:450–461. https://doi.org/10.1038/nrg2102
- Book: An Introduction to Systems Biology: Design Principles of Biological Circuits. Boca Raton, Florida: Chapman and Hall/CRC Press. https://doi.org/10.1016/j.mbs.2008.07.002
- Book: Feedback Systems. New Jersey: Princeton University Press. https://doi.org/10.1134/S0005117920030091
- Variability and robustness in biomolecular systems. Molecular Cell 28:755–760. https://doi.org/10.1016/j.molcel.2007.11.013
- Feedback for physicists: a tutorial essay on control. Reviews of Modern Physics 77:783–836. https://doi.org/10.1103/RevModPhys.77.783
- Stabilized feedback amplifiers. Bell System Technical Journal 13:1–18. https://doi.org/10.1002/j.1538-7305.1934.tb00652.x
- Accurate path integration in continuous attractor network models of grid cells. PLOS Computational Biology 5:e1000291. https://doi.org/10.1371/journal.pcbi.1000291
- Models of place and grid cell firing and theta rhythmicity. Current Opinion in Neurobiology 21:734–744. https://doi.org/10.1016/j.conb.2011.07.002
- Book: Rhythms of the Brain. New York: Oxford University Press. https://doi.org/10.1093/acprof:oso/9780195301069.001.0001
- Book: Membranes, Ions and Impulses: A Chapter of Classical Biophysics. Berkeley: University of California Press. https://doi.org/10.1126/science.163.3864.268
- Longitudinal impedance of the squid giant axon. Journal of General Physiology 24:771–788. https://doi.org/10.1085/jgp.24.6.771
- Recurrent inhibitory circuitry as a mechanism for grid formation. Nature Neuroscience 16:318–324. https://doi.org/10.1038/nn.3310
- Modeling the cerebellar microcircuit: new strategies for a long-standing issue. Frontiers in Cellular Neuroscience 10:176. https://doi.org/10.3389/fncel.2016.00176
- Strings on a violin: location dependence of frequency tuning in active dendrites. Frontiers in Cellular Neuroscience 11:72. https://doi.org/10.3389/fncel.2017.00072
- Plasticity in the intrinsic excitability of cortical pyramidal neurons. Nature Neuroscience 2:515–520. https://doi.org/10.1038/9165
- Noise in transcription negative feedback loops: simulation and experimental analysis. Molecular Systems Biology 2:41. https://doi.org/10.1038/msb4100081
- Pervasive robustness in biological systems. Nature Reviews Genetics 16:483–496. https://doi.org/10.1038/nrg3949
- Cognitive emissions of 1/f noise. Psychological Review 108:33–56. https://doi.org/10.1037/0033-295X.108.1.33
- Scale invariance in biology: coincidence or footprint of a universal mechanism? Biological Reviews of the Cambridge Philosophical Society 76:161–209. https://doi.org/10.1017/S1464793101005607
- Multiscaled randomness: a possible source of 1/f noise in biology. Physical Review E 54:2154–2157. https://doi.org/10.1103/physreve.54.2154
- Currents carried by sodium and potassium ions through the membrane of the giant axon of Loligo. The Journal of Physiology 116:449–472. https://doi.org/10.1113/jphysiol.1952.sp004717
- The components of membrane conductance in the giant axon of Loligo. The Journal of Physiology 116:473–496. https://doi.org/10.1113/jphysiol.1952.sp004718
- Resonance, oscillation and the intrinsic frequency preferences of neurons. Trends in Neurosciences 23:216–222. https://doi.org/10.1016/S0166-2236(00)01547-2
- Negative feedback and ultrasensitivity can bring about oscillations in the mitogen-activated protein kinase cascades. European Journal of Biochemistry 267:1583–1588. https://doi.org/10.1046/j.1432-1327.2000.01197.x
- Towards a theory of biological robustness. Molecular Systems Biology 3:137. https://doi.org/10.1038/msb4100179
- Attractor dynamics of spatially correlated neural activity in the limbic system. Annual Review of Neuroscience 35:267–285. https://doi.org/10.1146/annurev-neuro-062111-150351
- Erythropoiesis: from molecular pathways to system properties. Advances in Experimental Medicine and Biology 844:37–58. https://doi.org/10.1007/978-1-4939-2095-2_3
- Variability, compensation and homeostasis in neuron and network function. Nature Reviews Neuroscience 7:563–574. https://doi.org/10.1038/nrn1949
- Subthreshold behavior and phenomenological impedance of the squid giant axon. Journal of General Physiology 55:497–523. https://doi.org/10.1085/jgp.55.4.497
- A physiologist's view of homeostasis. Advances in Physiology Education 39:259–266. https://doi.org/10.1152/advan.00107.2015
- h-Type membrane current shapes the local field potential from populations of pyramidal neurons. The Journal of Neuroscience 38:6011–6024. https://doi.org/10.1523/JNEUROSCI.3278-17.2018
- Design principles of biochemical oscillators. Nature Reviews Molecular Cell Biology 9:981–991. https://doi.org/10.1038/nrm2530
- Regeneration theory. Bell System Technical Journal 11:126–147. https://doi.org/10.1002/j.1538-7305.1932.tb02344.x
- Neuronal homeostasis: time for a change? The Journal of Physiology 589:4811–4826. https://doi.org/10.1113/jphysiol.2011.210179
- Book chapter: Redundancy reduction of a Gabor representation: a possible computational role for feedback from primary visual cortex to lateral geniculate nucleus. In: Aleksander I, Taylor J, editors. Artificial Neural Networks. Amsterdam: North-Holland. pp. 865–868. https://doi.org/10.1007/s00422-007-0171-3
- Degeneracy in hippocampal physiology and plasticity. Hippocampus 29:980–1022. https://doi.org/10.1002/hipo.23139
- Checking cell size in yeast. Trends in Genetics 18:479–485. https://doi.org/10.1016/S0168-9525(02)02745-2
- Path integration and cognitive mapping in a continuous attractor neural network model. The Journal of Neuroscience 17:5900–5920. https://doi.org/10.1523/JNEUROSCI.17-15-05900.1997
- Active dendritic integration as a mechanism for robust and precise grid cell firing. Nature Neuroscience 20:1114–1121. https://doi.org/10.1038/nn.4582
- Cellular mechanisms of spatial navigation in the medial entorhinal cortex. Nature Neuroscience 16:325–331. https://doi.org/10.1038/nn.3340
- Book chapter: Blood Pressure: Baroreceptors. In: Squire LR, editor. Encyclopedia of Neuroscience. Oxford: Academic Press. pp. 259–264. https://doi.org/10.1007/978-3-540-29678-2
- Neuronal polarity: positive and negative feedback signals. Frontiers in Cell and Developmental Biology 7:69. https://doi.org/10.3389/fcell.2019.00069
- Resonant interneurons can increase robustness of gamma oscillations. The Journal of Neuroscience 35:15682–15695. https://doi.org/10.1523/JNEUROSCI.2601-15.2015
- Microcircuits for spatial coding in the medial entorhinal cortex. Physiological Reviews. https://doi.org/10.1152/physrev.00042.2020
- Selective regulation of current densities underlies spontaneous changes in the activity of cultured neurons. The Journal of Neuroscience 15:3640–3652. https://doi.org/10.1523/JNEUROSCI.15-05-03640.1995
- Homeostatic signaling: the positive side of negative feedback. Current Opinion in Neurobiology 17:318–324. https://doi.org/10.1016/j.conb.2007.04.004
- Hebb and homeostasis in neuronal plasticity. Current Opinion in Neurobiology 10:358–364. https://doi.org/10.1016/S0959-4388(00)00091-X
- Sniffers, buzzers, toggles and blinkers: dynamics of regulatory and signaling pathways in the cell. Current Opinion in Cell Biology 15:221–231. https://doi.org/10.1016/S0955-0674(03)00017-6
- Functional motifs in biochemical reaction networks. Annual Review of Physical Chemistry 61:219–240. https://doi.org/10.1146/annurev.physchem.012809.103457
- Self-organization of modular activity of grid cells. Hippocampus 27:1204–1213. https://doi.org/10.1002/hipo.22765
- Specific evidence of low-dimensional continuous attractor dynamics in grid cells. Nature Neuroscience 16:1077–1084. https://doi.org/10.1038/nn.3450
Article and author information
Author details
Funding
Wellcome Trust DBT India Alliance (IA/S/16/2/502727)
- Rishikesh Narayanan
Human Frontier Science Program (Career development award)
- Rishikesh Narayanan
Department of Biotechnology, Ministry of Science and Technology (DBT-IISc Partnership Program)
- Rishikesh Narayanan
Indian Institute of Science (Revati and Satya Nadham Atluri Chair Professorship)
- Rishikesh Narayanan
Ministry of Human Resource Development (Scholarship funds)
- Divyansh Mittal
The funders had no role in study design, data collection and interpretation, or the decision to submit the work for publication.
Acknowledgements
The authors thank Dr. Poonam Mishra, Dr. Sufyan Ashhad, and the members of the cellular neurophysiology laboratory for helpful discussions and for comments on a draft of this manuscript. The authors thank Dr. Ila Fiete for helpful discussions. This work was supported by the Wellcome Trust-DBT India Alliance (Senior fellowship to RN; IA/S/16/2/502727), Human Frontier Science Program (HFSP) Organization (RN), the Department of Biotechnology through the DBT-IISc partnership program (RN), the Revati and Satya Nadham Atluri Chair Professorship (RN), and the Ministry of Human Resource Development (RN and DM).
Copyright
© 2021, Mittal and Narayanan
This article is distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use and redistribution provided that the original author and source are credited.