Abstract
Although grid cells are one of the most well studied functional classes of neurons in the mammalian brain, the assumption that there is a single grid orientation and spacing per grid module has not been carefully tested. We analyze a recent large-scale recording of medial entorhinal cortex to characterize the presence and degree of heterogeneity of grid properties within individual modules. We find evidence for small, but robust, variability and hypothesize that this property of the grid code could enhance the encoding of local spatial information. Performing analysis on synthetic populations of grid cells, where we have complete control over the amount of heterogeneity in grid properties, we demonstrate that variability of a similar magnitude to that in the analyzed data leads to significantly decreased decoding error, even when restricted to activity from a single module. Our results highlight how heterogeneity of neural response properties may benefit coding and open new directions for theoretical and experimental analysis of grid cells.
The discovery of grid cells in medial entorhinal cortex (MEC) [1] has led to considerable experimental and computational work aimed at identifying their origin [2–12] and their function [13–21]. The organization of grid cells into discrete modules [22, 23], with particular grid properties (grid spacing and orientation) conserved within module, but not between modules, has fundamentally shaped this research. For instance, the increasing size and spacing of grid modules along the dorsal-ventral axis of MEC, by discontinuous jumps of a near-constant ratio, has been argued to be optimal for encoding local spatial information [24, 25], when grid cell activity across all modules is integrated together [24–28]. This is despite the fact that the hippocampus, a downstream target of the MEC, receives inputs from only a portion of the dorsal-ventral axis [29]. The modularity of the grid system has also been proposed to simplify the wiring necessary for generating continuous attractor dynamics [2, 3, 5, 16], a computational mechanism theorized to underlie grid cell function that enjoys considerable experimental support [23, 30–34].
Much of this prior theoretical work has made the additional assumption that grid cell properties are identical, up to a phase shift, within a single module. However, the distributions of measured orientation and spacing show non-zero variability [22, 30, 34]. This could be due to finite recording time, neurophysiological noise, or the sensitivity of numerical methods used to fit grid properties. Alternatively, this variability could reflect underlying inhomogeneity, at a fine scale, within modules. Despite the fundamental way in which grid cell function has been informed by their modular organization, to the best of our knowledge, no characterization of the degree and robustness of variability in grid properties within individual modules has been performed.
If robust variability of grid properties does exist, then it is possible that this heterogeneity could be exploited to encode additional information about local space, reducing the requirement of integration across multiple grid modules (Fig. 1). In particular, when all grid cells have the same grid orientation and spacing, the joint population activity has a translational invariance that persists, even as larger populations of grid cells are considered (Fig. 1, “Fixed grid properties within module”). In contrast, if grid cells within a module have variability in their grid orientation and spacing, then – over a finite area – the translational invariance of the population activity is broken, and distinct patterns emerge in distinct locations of space (Fig. 1, “Variable grid properties within module”). As an example, the blue and green grid cells in Fig. 1 show the most overlap in the upper left half of the arena (denoted by cyan pixels), while the red and blue grid cells show the most overlap in the lower right portion of the arena (denoted by purple pixels). This is despite the fact that the variability in spacing and orientation is only on the order of a few centimeters and degrees, respectively.
In this paper, we perform detailed analysis of recent state-of-the-art MEC electrophysiological recordings, which were made publicly available [34]. We characterize the variability of grid orientation and spacing within individual modules, and find evidence for small, but robust, variability. This variability is present whether a single value of grid orientation and spacing is assigned to each grid cell, or whether grid distortion [35, 36] is taken into account by considering three grid orientations and spacings independently. To assess the functional implications of this heterogeneity, we perform simulation experiments with synthetic, noisy grid cell populations, where we have complete control over the distribution of grid orientation and spacing. We find that the variability in grid cell orientation and spacing, at a similar degree as present in the data we analyze, leads to lower decoding error of local space when using the activity of a single module.
Taken together, our results challenge a long-held assumption and support a growing understanding of the spatial information encoded by grid cell populations [37–40]. Additionally, they encourage consideration of the broader benefits that multiple modules may provide, beyond the encoding of local space [41–43].
Results
Robust differences in grid cell properties within individual modules
To determine the extent to which variability of grid properties in individual modules exists, and to what extent this variability is a robust property of the grid code, we analyzed previously published MEC recordings [34], which include tens to hundreds of simultaneously recorded grid cells. This allows us to characterize the distribution of grid properties within a single grid module to an extent not possible with other data sets.
For each grid cell, we compute the grid spacing (λ) and orientation (θ) by measuring properties associated with the six hexagonally distributed peaks of the spatial autocorrelogram (SAC), as traditionally performed [44] (Fig. 2A; see Materials and Methods). For clarity, we begin by focusing on a single module, recorded from an open field environment (recording identifier: Rat R, Day 1, Module 2 – R12). This module was picked for its long recording time (approximately 130 minutes of recorded activity, as compared to the other open field recordings that have 30 minutes or less of recorded activity) and for its large number of simultaneously recorded putative grid cells (N = 168). In this module, we find that grid cells with high grid scores (> 0.85; N = 74) have a range of grid orientation and spacing (example cells shown in Fig. 2B; distributions across module shown in Fig. 2E, F), with λ ranging from 65 cm to 90 cm and θ ranging from 2° to 9°. Overlaying the SAC of pairs of grid cells with similar grid spacing and different grid orientation (Fig. 2C) or vice versa (Fig. 2D) enables visualization of the extent of this variability.
Because individual grid fields can be cut off by the boundaries of the environment, it is possible that computing the grid spacing from the SAC (which considers all grid fields) could lead to an under-estimate of λ for some grid cells. To verify that the width of the distribution of grid spacing within the same module is not due to the specifics of the SAC, we re-computed the grid spacing for all grid cells directly from their rate maps, considering only the three grid fields closest to the center of the environment (Fig. S2A; see Materials and Methods). We find that, while the grid spacing estimated from the SAC tends to be larger than the grid spacing estimated from the rate maps, a broad range of λ is again present (Fig. S2B, C).
To assess whether the heterogeneity of grid properties present in a single grid module is a robust feature or attributable to noise (either in the recording or the grid property fitting procedure), we measure the variability in grid orientation and spacing within a single grid cell and between pairs of grid cells. If the heterogeneity is explainable by noise, then we expect that the within-cell variability will be of the same size as the between-cell variability. In contrast, if the heterogeneity is a robust feature of the grid code, then we expect the within-cell variability will be significantly smaller than the between-cell variability.
To measure the within- and between-cell variability, we split the recording into evenly spaced 30-second bins, randomly assigning each temporal bin to one of two equal-length halves and recomputing the grid properties for each half of the recording (Fig. 3A; see Materials and Methods). We set inclusion criteria to filter out cells that did not have consistent hexagonally structured SACs across splits of the recording (see Materials and Methods). While these requirements are strict (see Fig. S1 for the percent of cells rejected), they set a conservative estimate on the amount of grid property variability, ensuring that we did not artificially inflate the variability due to weak grid cells. We do not find that the length of the temporal bin used to split the data has a large impact on the percent of cells accepted by these criteria (Fig. S3; see Materials and Methods).
We find that the distribution of within-cell variability of grid orientation and spacing is more concentrated around 0 than the between-cell variability, across all 100 random shuffles of the data into two halves. Comparing, for each cell, the within- and between-cell variability of grid spacing and orientation (averaged across shuffles) reveals that nearly all of the grid cells that passed the criteria for inclusion (N = 82) exhibit more between-cell than within-cell variability (Fig. 3D, E): 95.1% of grid cells for orientation and 100% of grid cells for spacing. A Wilcoxon signed-rank test indicates that between-cell variability is significantly higher than within-cell variability for both orientation and spacing (Δθ: p < 0.001; Δλ: p < 0.001).
In keeping with convention, the reported grid cell properties are the average of those computed for each of the three independent axes in the SAC (Axis 1: aligned to ≈ 0°; Axis 2: aligned to ≈ 60°; Axis 3: aligned to ≈ −60° [45]). To ensure that this averaging is not contributing to the greater between-cell variability, we repeated the analysis above, restricting ourselves to each axis separately. The results again demonstrate that grid properties are significantly more consistent within-cell than between-cell (Fig. 4A, B). For each axis, the average between-cell variability for every cell was significantly higher than the average within-cell variability (Fig. 4C, D), as reported by a Wilcoxon signed-rank test for orientation (p < 0.001, for all three axes) and for spacing (p < 0.001, for all three axes).
Having demonstrated that grid cell properties are robustly heterogeneous in a single module, we proceed to analyze the remaining recordings in the data set (N = 420 cells across 8 modules; one module has no cells that pass our criteria) [34]. Although there are differences across recordings, we find that larger between- than within-cell variability for grid orientation and spacing is present across all recordings (Fig. 5A, B; 80.0% of grid cells have greater between- than within-cell variability for orientation, and the majority of grid cells show the same pattern for spacing). A Wilcoxon signed-rank test finds that these differences are significant (Δθ: p < 0.001; Δλ: p < 0.001). We do not find evidence suggesting that the within-cell variability is influenced by grid score (Fig. 5C, D; linear regression for θ: R2 = 0.03, p = 0.55, Wald test; linear regression for λ: R2 = 0.07, p = 0.14, Wald test).
Taken together, our analysis of large-scale MEC recordings demonstrates that grid cells in the same grid module do not have a single grid spacing and orientation, but instead have a restricted range of values around the module mean. This property is a robust feature that cannot be explained by noise from the recording or fitting procedures.
Variability in grid properties within single modules improves the encoding of local space
The variability of grid cell properties within individual grid modules, while statistically significant, is small in magnitude. Can a computational benefit in the encoding of local space be gained from this level of inhomogeneity? How sensitive might such a computational benefit be to the exact amount of variability present?
To address these questions, we generate populations of synthetic grid cells (see Materials and Methods), where we have complete control over the number of grid cells and their firing properties. For simplicity, we assume that all grid cells in our population have grid spacing and orientation sampled from Gaussian distributions, with means µλ and µθ and standard deviations σλ and σθ, respectively. Assigning each grid cell in our population a grid orientation and spacing, we are able to generate “ideal” rate maps [14]. Sampling from a Poisson process on these ideal rate maps, we generate noisy grid cell rate maps (Fig. 6A). Leveraging a simple linear decoder (see Materials and Methods), we can examine how the decoding error of local space is affected by the number of grid cells in the population and the amount of variability (σλ and σθ).
We begin by investigating the decoding capabilities of a synthetic module with properties similar to those of the experimentally recorded module that was analyzed in detail (recording identifier R12; Figs. 2, 3). We therefore set µλ and µθ to the mean grid spacing and orientation of module R12. To determine appropriate values for σλ and σθ, we subtract the mean within-cell variability from the mean between-cell variability, for spacing and orientation respectively. We interpret these values as the amount of variability in grid spacing and orientation that is not due to noise. We therefore set σθ = 1° and σλ = 5 cm. This amount of variability leads to sampled distributions of λ and θ similar to those found in the real data (compare Fig. 2E, F with Fig. 6B, C).
Applying the linear decoder to populations of synthetic grid cells, we find that decoding error decreases as the number of grid cells is increased (Fig. 6D, blue line). When N = 1024, the decoding error is ≈ 20 cm, substantially better than random decoding (Fig. 6D, dashed black line). As expected, this result is not seen in a synthetic population of grid cells with no variability in grid properties (σθ = 0° and σλ = 0 cm) (Fig. 6D, black line). In particular, while the restricted number of grid firing fields enables a decoding error smaller than chance, the population with fixed grid properties exhibits little change in decoding error with increasing numbers of grid cells. This demonstrates that the improved encoding is specific to populations with inhomogeneity in their grid spacing and orientation.
To validate that the synthetic population was a reasonable surrogate for the experimentally recorded data, we perform the same decoding analysis on grid cells from recording ID R12 (see Materials and Methods). As the real data is limited in the number of cells, we are only able to compare populations of up to N = 64 cells (Fig. 6E). However, in this restricted range, we find agreement between the synthetic population with grid property variability and the recorded data (Fig. 6E, compare red and blue lines). This strengthens our hypothesis that the observed amounts of inhomogeneity in grid spacing and orientation can lead to improved decoding of local space.
To determine the extent to which the decrease in decoding error depends on the exact variability of grid spacing and orientation, we perform a grid search over the 36 pairs of (σλ, σθ) values in [0 cm, 1 cm, …, 5 cm] × [0°, 1°, …, 5°]. As expected, when the variability of both grid properties is large, we find nearly 0 decoding error of local space (Fig. 6F, bottom right). However, there is additionally a range of σλ and σθ values that lead to improved decoding, including values smaller than those used above (Fig. 6F, above and to the left of the white star). We perform the same grid search on synthetic populations of grid cells with smaller and larger mean spacings µλ (Fig. S4A, B), finding it again possible for the decoding error to drop well below that of the fixed population, if a sufficient amount of variability exists. For the synthetic module with smaller spacing, the existence of more grid fields in the 1.5 m × 1.5 m arena leads to greater complexity of interference patterns (Fig. 1), enabling the same amount of variability in grid spacing and orientation to lead to a sharper decrease in decoding error. Additionally, for the synthetic module with smaller spacing, a given value of σλ is a larger percentage of µλ, as compared to the synthetic module with larger spacing.
Finally, we consider how decoding within a single module compares to decoding across two modules. When the two modules are consecutive (e.g., modules 2 and 3), their grid spacings, λ2 and λ3, have been experimentally found to be related by a near-constant ratio, λ3 ≈ √2 λ2 [22]. In such a case, decoding activity from populations with fixed grid properties leads to nearly 0 error (Fig. S5A, black line). The addition of variability in grid properties does not significantly change the behavior of the decoding (Fig. S5A, blue line). This suggests that small amounts of inhomogeneity may not disrupt the previously derived theoretical bounds on decoding from multiple grid modules [24, 25, 28, 46]. However, if the two modules being decoded from are non-consecutive (e.g., modules 1 and 3), then the grid spacings can be related by an integer multiple, λ3 = 2λ1. In such a setting, the grid fields of the larger module are a subset of the grid fields of the smaller module, up to a rotation (due to the difference in orientation between the two modules), and we again find that variability in grid properties can improve decoding accuracy (Fig. S5B, compare black and blue lines). Taken together, these results suggest that individual grid modules can exhibit significant encoding of local space via heterogeneity in their grid properties, even when the extent of the variability in θ and λ is similar to that found in the analysis of the experimental recordings. This benefit can also improve encoding in cases when multiple, non-consecutive modules are considered.
Discussion
The multiple firing fields of grid cells, organized along a triangular lattice, have historically been interpreted as a limiting feature for the encoding of local space. Particularly influential in shaping this view has been the discovery of the organization of grid cells into distinct modules, with grid cell spacing (λ) and orientation (θ) preserved within, but not across, modules [22, 23], making integration across multiple modules necessary for spatial information to be decoded [24–28]. While evidence for discontinuity in grid cell properties across modules is strong, the corollary assumption, that within-module values of λ and θ are identical (within the bounds of noise), has not been systematically studied.
Analyzing recently collected MEC recordings, we found that the range of λ and θ values was large, with examples of grid cell pairs in the same module having over a 7° difference in grid orientation and a 20 cm difference in grid spacing (Fig. 2). Statistical analysis shows that, for the majority of grid cells, the variability is more robust than expected from noise (Fig. 5). This was despite the fact that we used a very conservative criterion for assessing whether a grid cell was robust enough to include in the analysis.
Our comparison of within- and between-cell grid property variability was key to our argument, as it was for previous work using it to study the robustness of differences in peak grid field firing rates [39]. The absence of such a comparison in previous characterizations of the distributions of λ and θ [30] may be why this heterogeneity was not identified until now. We find that our analysis is robust to choices of parameters (Figs. S1, S3) and to whether we treat each grid axis independently or take the average across axes (Figs. 3, 4). This challenges the commonly held belief that the variability observed in grid orientation and spacing is attributable solely to measurement noise.
We hypothesize that this variability may be used to increase the fidelity with which individual grid modules can encode local space. This idea is consistent with a large body of literature showing that heterogeneity in the responses of populations of neurons increases the robustness of encoding [47–50]. We indeed find, in noisy synthetic models of grid cells, that a level of variability in grid properties similar to what is quantified in the real data can be sufficient to accurately decode local spatial information (Fig. 6). This benefit increases with larger numbers of grid cells in the population (Fig. 6D), and is observed over a range of underlying variability values (Fig. 6F). We find that the improvement is most pronounced in modules with small grid spacing (Fig. S4A), although larger modules can see a decrease in decoding error for amounts of variability consistent with what was found in the analyzed data (Fig. S4B).
We note that our results are additionally aligned with recent findings of heterogeneity in maximum firing rates across individual grid fields [37–39] and grid shearing [35, 36, 40]. Our work further demonstrates that, even in the absence of these perturbations, individual grid modules may encode considerably more local spatial information than previously believed.
Finally, models of the formation of orientation maps in visual cortex have demonstrated that slight angular offsets of retinal mosaics, along which retinal receptive fields are organized, can generate complex patterns [51], similar to those found in visual cortex orientation maps [52]. Our results indicate that grid cells in MEC may be capable of leveraging a similar computational principle, suggesting that mosaic patterns might be a broadly utilized feature in neural coding [53].
Limitations
While the data set we analyzed [34] represents an advance in the ability to simultaneously record from tens to hundreds of putative grid cells, across grid modules, the MEC remains a challenging brain region to access for large-scale neurophysiological experiments. Indeed, with our conservative inclusion criteria, we were ultimately limited by having only 420 grid cells included in our analysis. Future work can perform more detailed and complete characterizations of grid property heterogeneity, as new neurotechnologies that enable larger yield of grid cells are developed [54, 55].
Our decoding analysis, while demonstrating the possibility that variability in grid properties can be leveraged by single grid modules to enhance the encoding of local spatial information, made several simplifying assumptions: Poisson sampling for spike rates; linear decoding; normal distribution of grid properties. While comparison to real data showed that these assumptions are reasonable (Fig. 6E), future work can assess the extent to which these restrictions can be lifted, while still enabling individual grid modules to have low decoding error.
Open questions
The heterogeneity in grid properties we characterize motivates the investigation of several new lines of research. Because these are directions that we believe to be fruitful for the field as a whole, we outline them below, with our hypotheses for possible answers.
Q1: What causes grid property variability?
A natural first direction is to identify the source of the heterogeneity in grid cell properties we observe. One hypothesis is that it is driven by “defects” in the specific connectivity pattern that is needed for a continuous attractor. However, whether stable path integration could occur in the face of such defects is unclear. Alternatively, the coupling between hippocampus and MEC, which has been shown to lead to variability in grid field firing rates (as experimentally observed) [39, 56], may lead to differences in grid orientation and spacing. In particular, a previous computational model that learned grid cells from place cell input, using non-negative principal component analysis, showed that place field width affects grid cell orientation and spacing [6]. Heterogeneity in the spatial coding properties of place cells has been found along the transverse axis of CA3 [57–59], suggesting that there may be systematic differences in the place field widths of hippocampal inputs to MEC grid cells. Further, this place-to-grid computational model exhibits a linear relationship between place field width and grid spacing, and a non-monotonic relationship between place field width and grid orientation [6]. These relationships may explain why we find stronger average variability in grid spacing than in grid orientation (Fig. 5A, B).
Q2: How does grid property variability affect continuous attractor network structure?
Continuous attractor network models of grid cells [2, 3, 5, 16] enjoy considerable experimental support [23, 30–34], making them one of the “canonical” models in neuroscience. However, these models make use of the assumption that all the grid cells in a given module have the same grid orientation and spacing to simplify the network connectivity. In particular, by assuming equal grid orientation and spacing, it becomes possible to arrange grid cells in a two-dimensional space spanned by their phases (i.e., a “neural sheet” [16]). Neurons close in this space are wired with excitatory connections and neurons far in this space are wired with inhibitory connections. As the data set we analyzed was found to provide strong support for the basic predictions of continuous attractor networks (i.e., toroidal topology of the activity manifold) [34], we do not view our results as directly challenging these models. However, considering how these models can accommodate the observed variability is an important future direction. It is possible that such modifications are learnable, via some plasticity mechanism. These solutions may lead to geometric effects on the underlying manifold, which could be explored by using computational tools [60], as well as affect the dynamical properties of the neural population activity, which could be quantified by data-driven dynamical systems methods [61–63]. We hypothesize that the degree of variability of grid spacing and orientation may strike a balance between being small enough to keep the continuous attractor network structure stable, but large enough to enable encoding of local spatial information.
Q3: How does grid property variability shape hippocampal representations?
The projections from MEC to hippocampus suggest that the variability in grid properties may influence hippocampal representations (even if grid cells do not comprise the majority of its inputs). We consider two possible ways in which this may happen. First, given that grid cells have been reported to maintain their grid spacing and orientation across exposures to new environments, while undergoing a change in their grid phase [30, 64], integration across multiple modules has been thought necessary to explain place field remapping. However, grid phase plays an important role in generating the specific complex interference patterns that emerge when considering the joint activity of grid cells with variable grid properties (Fig. 1). Thus, the reported changes in phase may be sufficient to generate large (and seemingly random) changes in the local spatial information conveyed by grid cells to hippocampal cells. This could drive additional changes in the local spatial information projected to hippocampus, as well as explain the significant differences in correlation structure between CA1 neurons across different environments [65].
Second, recent work on hippocampal place field drift [66–70] has demonstrated that there is a significant change in place field location across time, especially in CA1. One possible source of this phenomenon is the reported instability of dendritic spines on the apical dendrites of CA1 place cells [59, 71–73], ostensibly leading to changes in the MEC inputs to these neurons. However, if grid cells across multiple modules are necessary for local spatial information, turnover in synaptic input is unlikely to cause large changes in the spatial preferences of CA1 neurons, as integration over several scales should provide stable encoding properties. In contrast, different subpopulations of grid cells with variable grid properties can lead to differences in the local spatial information encoded by their joint activity, even if they come from a single module, possibly influencing the spatial preferences of CA1 place cells.
Q4: Why are there multiple modules?
Given that our results demonstrate the ability of single grid modules to encode information about local space – a feat previously believed to be possible only if activity from multiple grid modules was integrated together – why does MEC have multiple modules? While the variability removes the necessity of encoding local space with multiple modules, higher fidelity representations are achievable by integrating across multiple modules (Fig. S5). In addition, the use of grid cells beyond spatial navigation [74–79], where hierarchical representations are important [42, 43], may be a sufficient implicit bias for the formation of multiple modules [80]. In particular, encoding information at multiple distinct scales is critical for multi-scale reasoning, a cognitive function grid cells may support.
Materials and Methods
Electrophysiology recordings
The neural activity analyzed in this paper comes from a publicly available data set, which has previously been described in detail [34]. We provide a brief summary of the methodology and the experimental paradigms used during the recordings.
Three male rats (Long Evans – Rats Q, R, and S) were implanted with Neuropixels silicon probes [81, 82]. These probes were targeted at the MEC-parasubiculum region and the surgery was performed as described previously [32, 82]. After three hours of recovery, recordings were performed.
In the recordings analyzed, the rats foraged for randomly dispersed corn puffs in a 1.5 m × 1.5 m square open field arena, with walls of height 50 cm. The rats were familiar with the environment and task, having trained 10–20 times prior to the implantation. The rats were food restricted to motivate their foraging, being kept at a minimum of 90% of their original body weight (300–500 grams).
All procedures in the original study were approved by the Norwegian Food and Safety Authority and done in accordance with the Norwegian Animal Welfare Act and the European Convention for the Protection of Vertebrate Animals used for Experimental and Other Scientific Purposes.
Electrophysiology post-processing
The neural activity analyzed in this paper was post-processed before being made publicly available. We describe, in brief, the post-processing performed in the original study [34], as well as the post-processing we performed on the downloaded data.
Spike sorting, via KiloSort 2.5 [82], was applied to the data recorded from the Neuropixels probes. Individual units were deemed putative cells if their average spike rate was in the range of 0.5–10 Hz and 99% of their interspike intervals were greater than 2 ms.
For each putative cell, rate maps were constructed by averaging the activity at binned spatial positions in the open field arena. This raw rate map was smoothed using a Gaussian kernel. The autocorrelation of these rate maps was computed, and a grid score calculated, as described previously [44]. To determine which putative cells were grid cells, and to which module they belonged, autocorrelograms of the raw rate maps were constructed, with some of the bins corresponding to the closest and furthest distances removed. These “cropped” autocorrelograms were then clustered by using UMAP [83] to project the data onto a two-dimensional subspace. The cells in the clusters other than the largest cluster were found to have similar grid spacing and orientation, as well as high grid scores. They were therefore deemed putative grid cells. All cells in these clusters had their spike trains placed in the public repository from which we downloaded the data.
From the downloaded spike trains, we constructed rate maps and autocorrelograms in a similar manner, using code made publicly available [8] (Fig. 2).
Computing grid score, orientation and spacing from the spatial autocorrelogram
Grid spacing and grid orientation were computed according to standard methods described in detail previously [22, 84]. Briefly, the goal of the procedure is to identify the location of the six nearest fields in the spatial autocorrelogram (SAC) (Fig. 2A). This was achieved by performing the following steps. First, the SAC was smoothed using a 2D Gaussian filter with σ = 1. Second, the center of the SAC was excluded by applying a mask. Because the size of the SAC’s central peak changes for every module, the radius of this mask was re-computed for each module. Third, we thresholded the SAC by applying an extended-maxima transform (scipy.ndimage.maximum_filter). Fourth, we identified the center of every field by using the function scipy.signal.find_peaks.
Once the peaks of every field had been found, we computed the location of every peak in polar coordinates. We then selected the 6 peaks that were closest to the center of the SAC, based on the computed radial components. Because every SAC is symmetric, we considered for further analysis the 3 peaks closest to the x-axis in angular distance (Axes 1, 2, and 3 [45]). Grid spacing was computed as the arithmetic mean of the radial components of the 3 peaks (except when each peak was analyzed separately – Fig. 4). Given that the SAC dimensions are twice those of the real arena, we multiplied the SAC radial mean by a factor of 2. Grid orientation was computed as the angular mean of the orientations (relative to the x-axis) of the three peaks (except when each peak was analyzed separately – Fig. 4).
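To make this procedure concrete, the following Python sketch estimates grid spacing and orientation from a single rate map. It is a minimal illustration rather than the published pipeline: peak detection uses skimage.feature.peak_local_max in place of the extended-maxima transform and find_peaks steps described above, and the smoothing width, mask radius, and spatial bin size are illustrative placeholder values.

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from scipy.signal import correlate2d
from skimage.feature import peak_local_max

def grid_spacing_and_orientation(rate_map, bin_size_cm=3.0, mask_radius_bins=6):
    """Sketch: estimate grid spacing (cm) and orientation (deg) from the SAC."""
    # Spatial autocorrelogram (SAC) of the mean-subtracted rate map.
    rm = np.nan_to_num(rate_map - np.nanmean(rate_map))
    sac = correlate2d(rm, rm, mode="full")
    sac = gaussian_filter(sac, sigma=1.0)                 # step 1: smooth the SAC
    cy, cx = np.array(sac.shape) // 2

    # step 2: mask the central peak (radius is module-dependent in practice)
    yy, xx = np.indices(sac.shape)
    sac[np.hypot(yy - cy, xx - cx) < mask_radius_bins] = sac.min()

    # steps 3-4: detect local maxima (peak_local_max stands in for the
    # extended-maxima transform / find_peaks pipeline described in the text)
    peaks = peak_local_max(sac, min_distance=3, num_peaks=12)

    # polar coordinates of each peak relative to the SAC center
    dy, dx = peaks[:, 0] - cy, peaks[:, 1] - cx
    r = np.hypot(dx, dy)
    ang = np.degrees(np.arctan2(dy, dx))                  # sign depends on image axis convention

    # six peaks closest to the center; keep the three closest to the x-axis
    order = np.argsort(r)[:6]
    r, ang = r[order], ang[order]
    axes = np.argsort(np.abs(ang))[:3]

    spacing = np.mean(r[axes]) * bin_size_cm              # SAC lag bins -> cm (conversion depends on SAC convention)
    orientation = np.mean(ang[axes])
    return spacing, orientation
```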
In order to ensure that subsequent analysis was performed only on cells whose SACs could be well described by hexagonal structure, we imposed the following constraints: 1) the relative angle between two peaks could not be < 30°; 2) the relative angle between two peaks could not be > 90°; 3) the values for grid spacing between the peaks could not be substantially different (the ratio of spacings between any two peaks must be > 0.5 and < 2). Cells that did not meet these criteria were determined to have a “Poor grid fit” and were rejected from all subsequent analysis. The percentage of all cells that were removed by these inclusion criteria is shown in Fig. S1. We note that one of the nine modules in the publicly available data set that we analyzed had 98.4% of its cells rejected by these criteria (recording identifier – R23). We therefore did not include it in any subsequent analysis.
To compute the grid score of recorded MEC cells, we made use of previously published code [8], which is based on metrics that have become standard in quantifying grid cell properties [44].
For analysis of the distribution of grid properties (Fig. 2E, F), we included only grid cells with grid scores greater than 0.85. This was done to demonstrate that variability is present even in cells that exhibit robust grid cell properties. In the subsequent analyses, an alternative criterion is used, which considers the reliability of the grid responses (see below).
Computing spacing from the rate maps
In order to characterize the spacing with a method that is less susceptible to the effects of having grid fields at the boundaries of the environment (as the SAC method could be), we compute the spacing using the rate maps of individual grid cells (Fig. S2A). Once the rate map was computed, we smoothed it with a Gaussian filter, setting σ = 1.5. Then, using the function skimage.feature.peak_local_max, we extract the position of the center of each firing field. Because our goal with this analysis is to show that the fields at the border do not impact our analysis with the SAC, we restrict ourselves to the 3 peaks that are closest to the center of the arena (Fig. S2A). We compute the pairwise distances between those three peaks and report the spacing as the average of the measured distances.
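A minimal sketch of this rate-map-based estimate, assuming the rate map is a 2D numpy array and each spatial bin is bin_size_cm wide (an illustrative parameter, not the value used in the original analysis):

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from scipy.spatial.distance import pdist
from skimage.feature import peak_local_max

def spacing_from_rate_map(rate_map, bin_size_cm=3.0, n_central=3):
    """Sketch: grid spacing from the three firing fields nearest the arena center."""
    rm = gaussian_filter(np.nan_to_num(rate_map), sigma=1.5)     # smooth the rate map
    peaks = peak_local_max(rm, min_distance=3)                   # field centers, (row, col)
    center = np.array(rm.shape) / 2.0
    nearest = peaks[np.argsort(np.linalg.norm(peaks - center, axis=1))[:n_central]]
    # mean pairwise distance between the three central fields, converted to cm
    return pdist(nearest.astype(float)).mean() * bin_size_cm
```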
Within- and between-cell splits
To characterize the within- and between-cell variability of grid spacing and orientation, we employed the following approach. First, we split the data into bins of fixed length (30 seconds). We then randomly assigned each bin to one of two blocks (denoted as blocks A and B), with exactly half the total number of bins in each block. For each grid cell, we computed the grid spacing (λA and λB) and orientation (θA and θB) from the data in each block. The within-cell differences in grid spacing and orientation were determined as Δλwithin = |λA − λB| and Δθwithin = |θA − θB|. Each grid cell’s properties were also compared to those of another grid cell, with the match being made using random sampling without replacement; these comparisons determined the between-cell differences, Δλbetween and Δθbetween. This process was repeated 100 times (referring to each iteration as a “shuffle”), per recording. Examples of splits of the data, for different shuffles, are schematically illustrated in Fig. 3A. The resulting distributions of Δθwithin, Δλwithin, Δθbetween, and Δλbetween, across grid cells and splits of the data, were then compared. If the within-cell variability in grid spacing and orientation was smaller than the between-cell variability, we concluded that the variability of grid cell properties was a robust feature of the data and not due to noise.
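The sketch below illustrates one shuffle of this procedure. It assumes a helper such as the grid_spacing_and_orientation function sketched earlier is used to estimate each property from the spikes falling in each block; the particular pairing scheme used for the between-cell comparison is an assumption.

```python
import numpy as np

rng = np.random.default_rng(0)

def split_halves(t_start, t_stop, bin_s=30.0):
    """One shuffle: randomly assign 30-s time bins to two equal-length blocks (A, B)."""
    edges = np.arange(t_start, t_stop, bin_s)
    bins = np.stack([edges, np.minimum(edges + bin_s, t_stop)], axis=1)
    idx = rng.permutation(len(bins))
    half = len(bins) // 2
    return bins[idx[:half]], bins[idx[half:2 * half]]            # block A, block B

def within_between(prop_A, prop_B):
    """Within- and between-cell differences of one grid property for one shuffle.

    prop_A, prop_B : arrays of shape (n_cells,), the property (spacing or
    orientation) estimated from block A and block B of each cell.
    """
    n = len(prop_A)
    within = np.abs(prop_A - prop_B)
    partner = rng.permutation(n)
    while np.any(partner == np.arange(n)):                       # avoid pairing a cell with itself
        partner = rng.permutation(n)
    between = np.abs(prop_A - prop_B[partner])
    return within, between
```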
When performing this shuffle analysis, we found that some cells, despite having a good grid fit when all of the data was considered, did not have SACs that were well described by hexagonal structure when the data was split in half. We viewed this as a manifestation of unreliable grid coding and a possible confound in our quantification of variability. As such, we introduced a new inclusion criterion (replacing the requirement of a grid score > 0.85), only considering cells that had poor grid fits on < 5% of all shuffles. The percentage of all cells that were removed by this inclusion criterion is shown in Fig. S1. In general, we found that cells with high grid scores were reliable, although there were exceptions. Additionally, we found that the size of the bin used for splitting the data did not significantly affect the percent of cells with good grid fits (i.e., those that passed the prior inclusion criteria) that were considered reliable (Fig. S3).
Synthetic grid cells
To study how variability in grid cell properties might endow the grid code with computational advantages, we generated synthetic grid cell rate maps, so that we could have complete control over the distribution of their properties. These synthetic grid cell rate maps were constructed as follows.
First, the lengths of each dimension of the “arena” within which the simulated grid cells exist (Lx and Ly) were set. Then, for N ∈ ℕ grid cells, the grid spacing and orientation were sampled via λ(i) ∼ 𝒩(µλ, σλ) and θ(i) ∼ 𝒩(µθ, σθ), where 𝒩(µ, σ) is a normal distribution with mean µ and variance σ2. Grid phase was sampled as ϕ(i) ∼ 𝒰([0, Lx] × [0, Ly]), where ϕ(i) is a two-dimensional vector, with first component uniformly sampled from [0, Lx] and second component uniformly sampled from [0, Ly]. To construct a population with no variability in grid properties (to use as a control), we set σλ = 0 cm and σθ = 0°.
For each grid cell, we generated idealized grid responses by summing three two-dimensional sinusoids [14], such that the activity at x = (x, y) ∈ [−Lx/2, Lx/2] × [−Ly/2, Ly/2] is given by

X(i)(x) = rmax(i) · (2/3) [ (1/3) Σj cos( kj · R(θ(i)) (x − ϕ(i)) ) + 1/2 ],   (1)

where the sum runs over j = 1, 2, 3, rmax(i) is the maximal firing rate, R(θ(i)) is the two-dimensional rotation matrix with rotation θ(i), and the kj are the wave vectors with 0°, 60°, and 120° angular differences [i.e., oriented along 0°, 60°, and 120°, with magnitude |kj| = 4π/(√3 λ(i))]. To match the recorded neural data, where individual grid cells have distinct maximal firing rates, rmax(i) was sampled separately for each cell. We enforced rmax(i) to be within [2, 30], by setting any sampled values outside of this range to the boundary values (i.e., 2 or 30).
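As an illustration, the following Python sketch implements this construction under a few assumptions that go beyond the text above: the standard (2/3)(·/3 + 1/2) normalization of the summed cosines, a wavenumber of 4π/(√3 λ), a lognormal distribution (clipped to [2, 30]) for the maximal rate, and placeholder module means µλ = 0.75 m and µθ = 5° (the fitted values for module R12 are not reproduced here).

```python
import numpy as np

def ideal_grid_rate_map(spacing, orientation_deg, phase, r_max, L=1.5, n_bins=50):
    """Idealized grid rate map as a sum of three plane waves (after [14]).

    Units: spacing, phase, and L in meters; orientation in degrees; r_max in Hz.
    The normalization maps the summed cosines into [0, r_max].
    """
    xs = np.linspace(-L / 2, L / 2, n_bins)
    X, Y = np.meshgrid(xs, xs)
    pos = np.stack([X - phase[0], Y - phase[1]], axis=-1)        # offset by the grid phase

    th = np.radians(orientation_deg)
    R = np.array([[np.cos(th), -np.sin(th)],
                  [np.sin(th),  np.cos(th)]])                    # rotate the wave vectors by theta
    k_mag = 4 * np.pi / (np.sqrt(3) * spacing)                   # wavenumber for a hexagonal lattice
    rate = np.zeros((n_bins, n_bins))
    for ang in np.radians([0.0, 60.0, 120.0]):                   # three wave vectors, 60 deg apart
        k = k_mag * (R @ np.array([np.cos(ang), np.sin(ang)]))
        rate += np.cos(pos @ k)
    return r_max * (2.0 / 3.0) * (rate / 3.0 + 0.5)              # rescale to [0, r_max]

def sample_module(n_cells, mu_lambda=0.75, sigma_lambda=0.05,
                  mu_theta=5.0, sigma_theta=1.0, L=1.5, seed=0):
    """Sample a synthetic module with variable grid spacing and orientation.

    The default means are illustrative placeholders, and r_max is drawn from
    an assumed lognormal distribution clipped to [2, 30] Hz.
    """
    rng = np.random.default_rng(seed)
    maps = []
    for _ in range(n_cells):
        lam = rng.normal(mu_lambda, sigma_lambda)                # grid spacing (m)
        th = rng.normal(mu_theta, sigma_theta)                   # grid orientation (deg)
        phase = rng.uniform(0.0, L, size=2)                      # grid phase, uniform over the arena
        r_max = np.clip(rng.lognormal(2.0, 0.5), 2.0, 30.0)      # maximal rate, clipped to [2, 30]
        maps.append(ideal_grid_rate_map(lam, th, phase, r_max, L=L))
    return np.stack(maps)                                        # (n_cells, n_bins, n_bins)
```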
To determine the extent to which local spatial information can be decoded from the activity of populations of grid cells with different degrees of variability in their grid properties, we performed the following analysis.
We generated noisy synthetic spike rates of N grid cells by assuming a Poisson process and sampling using the idealized rate maps (Eq. 1). More concretely, the activity of grid cell i at position (x, y) was assumed to be a random variable with a Poisson distribution, whose mean was X(i)(x, y). Thus, the probability of observing n spikes at position (x, y) is given by

P[n(i)(x, y) = n] = (X(i)(x, y))^n exp(−X(i)(x, y)) / n!.   (2)
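Given idealized rate maps such as those returned by the sample_module sketch above, the Poisson sampling step is a single call; each entry of the noisy map is an independent Poisson draw whose mean is the idealized rate at that position.

```python
import numpy as np

rng = np.random.default_rng(1)
ideal_maps = sample_module(n_cells=128)             # (cells, bins, bins), from the sketch above
noisy_maps = rng.poisson(ideal_maps).astype(float)  # one independent Poisson draw per cell and position
```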
Linear decoding of synthetic data
For a given resolution of the arena, we generated, for each grid cell, 10 independent noisy rate maps (indexed by j = 1, …, 10). We performed cross-validated decoding by averaging 9 of the 10 rate maps to get an average rate map, holding out the remaining noisy rate map. To decode local position from the held-out noisy rate map, we multiplied the population activity at each position by the population activity at every position in the average rate maps, taking the sum across cells and assigning the decoded position as the one with the largest value. The Euclidean distance between the decoded position and the true position is considered the error. This was performed 10 times (holding out each rate map once) and the average error across all positions in the environment was then averaged across all 10 of the validation splits.
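A minimal sketch of this decoder, assuming the held-out and averaged rate maps are stacked as (cells × bins × bins) arrays (as produced by the sketches above) and that each spatial bin is bin_size_cm wide (an illustrative value, not the resolution used in the paper):

```python
import numpy as np

def decode_errors(heldout_maps, template_maps, bin_size_cm=3.0):
    """Template-matching decoder sketched above.

    heldout_maps, template_maps : arrays of shape (n_cells, n_bins, n_bins).
    For each true position, the held-out population activity vector is dotted
    with the averaged (template) activity at every candidate position; the
    decoded position is the candidate with the largest value.
    """
    n_cells, nb, _ = heldout_maps.shape
    A = heldout_maps.reshape(n_cells, -1)          # held-out activity, (cells, positions)
    T = template_maps.reshape(n_cells, -1)         # averaged activity, (cells, positions)
    decoded = np.argmax(A.T @ T, axis=1)           # best-matching candidate for every true position
    coords = np.stack(np.unravel_index(np.arange(nb * nb), (nb, nb)), axis=1)
    return np.linalg.norm(coords - coords[decoded], axis=1) * bin_size_cm

# One cross-validation fold: hold out the 10th noisy map, average the other 9.
rng = np.random.default_rng(2)
all_noisy = np.stack([rng.poisson(ideal_maps).astype(float) for _ in range(10)])
errors = decode_errors(all_noisy[-1], all_noisy[:-1].mean(axis=0))
print(errors.mean())                               # mean decoding error (cm) for this fold
```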
Linear decoding of experimental data
To decode the electrophysiological data [34], we sampled subpopulations of grid cells from the N = 82 units (recording ID R12) that passed the criteria described previously. For each subpopulation, we split the recorded activity into 10 evenly spaced temporal bins and constructed average rate maps for each bin. This is consistent with what was done in decoding the synthetic data. All rate maps, for each cell, were normalized by the maximal activity. We then randomly chose 5 of the 10 time bins to serve as the “train” data; the remaining 5 served as the “test” data. The rate maps were averaged and then the linear decoder was applied. We sampled 10 different choices of splitting the data and 25 choices of subpopulation.
Acknowledgements
We thank the members of the Goard Lab, Francisco Acosta, Spencer Smith, Caleb Kemere, and the DYNS graduate students for useful discussions surrounding this work. We thank Andreas Herz for suggesting the analysis present in Fig. S2. This work was supported by grants to M.J.G. from NIH (R01 NS121919), NSF (1934288), and the Whitehall Foundation, and grants to X.X.W. from NSF (2318065).
Supplemental material
References
- [1]Microstructure of a spatial map in the entorhinal cortexNature 436:801–806
- [2]A model of grid cells based on a path integration mechanismArtificial Neural Networks–ICANN 2006: 16th International Conference, Athens, Greece, September 10-14, 2006 Springer :740–749
- [3]A spin glass model of path integration in rat medial entorhinal cortexJournal of Neuroscience 26:4266–4276
- [4]Scale-invariant memory representations emerge from moire interference between grid fields that produce theta oscillations: a computational modelJournal of Neuroscience 27:3211–3229
- [5]Recurrent inhibitory circuitry as a mechanism for grid formationNature neuroscience 16:318–324
- [6]Extracting grid cell characteristics from place cell inputs using non-negative principal component analysisElife 5
- [7]Emergence of grid-like representations by training recurrent neural networks to perform spatial localizationInternational Conference on Learning Representations
- [8]Vector-based navigation using grid-like representations in artificial agentsNature 557:429–433
- [9]Learning place cells, grid cells and invariances with excitatory and inhibitory plasticityElife 7
- [10]A unified theory for the origin of grid cells through the lens of pattern formationAdvances in neural information processing systems 32
- [11]From smooth cortical gradients to discrete modules: spontaneous and topologically robust emergence of modularity in grid cellsbioRxiv :2021–10
- [12]A unified theory for the computational and mechanistic origins of grid cellsNeuron
- [13]Path integration and the neural basis of the’cognitive map’Nature Reviews Neuroscience 7:663–678
- [14]From grid cells to place cells: a mathematical modelHippocampus 16:1026–1031
- [15]Entorhinal cortex grid cells can map to hippocampal place cells by competitive learningNetwork: Computation in Neural Systems 17:447–465
- [16]Accurate path integration in continuous attractor network models of grid cellsPLoS computational biology 5
- [17]The input–output transformation of the hippocampal granule cells: from grid cells to place fieldsJournal of Neuroscience 29:7504–7512
- [18]Do the spatial frequencies of grid cells mold the firing fields of place cells?Proceedings of the National Academy of Sciences 112:3860–3861
- [19]Using grid cells for navigationNeuron 87:507–520
- [20]Place field expansion after focal mec inactivations is consistent with loss of fourier components and path integrator gain reductionProceedings of the National Academy of Sciences 112:4116–4121
- [21]Grid scale drives the scale and long-term stability of place mapsNature neuroscience 21:270–282
- [22]The entorhinal grid map is discretizedNature 492:72–78
- [23]A map-like micro-organization of grid cells in the medial entorhinal cortexCell 175:736–750
- [24]A principle of economy predicts the functional architecture of grid cellsElife 4
- [25]Connecting multiple spatial scales to decode the population activity of grid cellsScience Advances 1
- [26]What grid cells convey about rat locationJournal of Neuroscience 28:6858–6871
- [27]Grid cells generate an analog error-correcting code for singularly precise neural computationNature neuroscience 14:1330–1337
- [28]Optimal population codes for space: grid cells outperform place cellsNeural computation 24:2280–2317
- [29]The anatomy of memory: an interactive overview of the parahippocampal–hippocampal networkNature reviews neuroscience 10:272–282
- [30]Specific evidence of low-dimensional continuous attractor dynamics in grid cellsNature neuroscience 16:1077–1084
- [31]Correlations and functional connections in a population of grid cellsPLoS computational biology 11
- [32]Correlation structure of grid cells is preserved during sleepNature neuroscience 22:598–608
- [33]Grid cell co-activity patterns during sleep reflect spatial overlap of grid fields during active behaviorsNature neuroscience 22:609–617
- [34]Toroidal topology of population activity in grid cellsNature 602:123–128
- [35]Fragmentation of grid cell maps in a multicompartment environmentNature neuroscience 12:1325–1332
- [36]Grid cell symmetry is shaped by environmental geometryNature 518:232–235
- [37]Grid and nongrid cells in medial entorhinal cortex represent spatial location and environmental features with complementary coding schemesNeuron 94:83–92
- [38]Grid cells encode local positional informationCurrent Biology 27:2337–2343
- [39]Grid cells show field-to-field variability and this explains the aperiodic response of inhibitory interneuronsarXiv
- [40]Are grid cells used for navigation? on local metrics, subjective spaces, and black holesNeuron
- [41]A framework for intelligence and cortical function based on grid cells in the neocortexFrontiers in neural circuits 12
- [42]Efficient and flexible representation of higher-dimensional cognitive variables with grid cellsPLoS computational biology 16
- [43]The grid code for ordered experienceNature Reviews Neuroscience 22:637–649
- [44]Conjunctive representation of position, direction, and velocity in entorhinal cortexScience 312:758–762
- [45]Shearing-induced asymmetry in entorhinal grid cellsNature 518:207–212
- [46]Resolution of nested neuronal representations can be exponential in the number of neuronsPhysical review letters 109
- [47]Implications of neuronal diversity on population codingNeural computation 18:1951–1986
- [48]Efficient coding in heterogeneous neuronal populationsProceedings of the National Academy of Sciences 105:16344–16349
- [49]Computational implications of biophysical diversity and multiple timescales in neurons and synapses for circuit performanceCurrent opinion in neurobiology 37:44–52
- [50]Neural heterogeneity promotes robust learningNature communications 12
- [51]Retinal origin of orientation maps in visual cortexNature neuroscience 14:919–925
- [52]Voltage-sensitive dyes reveal a modular organization in monkey striate cortexNature 321:579–585
- [53]Life imitates op artNature Neuroscience 14:803–804
- [54]Cellular resolution optical access to brain regions in fissures: imaging medial prefrontal cortex and grid cells in entorhinal cortexProceedings of the National Academy of Sciences 111:18739–18744
- [55]Large-scale two-photon calcium imaging in freely moving miceCell 185:1240–1256
- [56]A theory of joint attractor dynamics in the hippocampus and the entorhinal cortex accounts for artificial remapping and grid cell field-to-field variabilityElife 9
- [57]Neural population evidence of functional heterogeneity along the ca3 transverse axis: pattern completion versus pattern separationNeuron 87:1093–1105
- [58]Topography of place maps along the ca3-to-ca2 axis of the hippocampusNeuron 87:1078–1092
- [59]Long-term transverse imaging of the hippocampus with glass microperiscopesElife 11
- [60]Quantifying extrinsic curvature in neural manifoldsProceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition :610–619
- [61]Algorithmic (semi-) conjugacy via Koopman operator theory2022 IEEE 61st Conference on Decision and Control (CDC) IEEE :6006–6011
- [62]On equivalent optimization of machine learning methodsarXiv
- [63]Beyond geometry: Comparing the temporal structure of computation in neural circuits with dynamical similarity analysisAdvances in Neural Information Processing Systems 36
- [64]Hippocampal remapping and grid realignment in entorhinal cortexNature 446:190–194
- [65]A manifold neural population code for space in hippocampal coactivity dynamics independent of place fieldsCell Reports 42
- [66]Neuronal code for extended time in the hippocampusProceedings of the National Academy of Sciences 109:19462–19467
- [67]Long-term dynamics of ca1 hippocampal place codesNature neuroscience 16:264–266
- [68]Parallel emergence of stable and dynamic memory engrams in the hippocampusNature 558:292–296
- [69]Persistence of neuronal representations through time and damage in the hippocampusScience 365:821–825
- [70]Distinct place cell dynamics in ca1 and ca3 encode experience in new environmentsNature communications 12
- [71]High-resolution in vivo imaging of hippocampal dendrites and spinesJournal of Neuroscience 24:3147–3151
- [72]Impermanence of dendritic spines in live adult ca1 hippocampusNature 523:592–596
- [73]Chronic 2p-sted imaging reveals high turnover of dendritic spines in the hippocampus in vivoElife 7
- [74]A map of visual space in the primate entorhinal cortexNature 491:761–764
- [75]Organizing conceptual knowledge in humans with a gridlike codeScience 352:1464–1468
- [76]Mapping of a non-spatial dimension by the hippocampal– entorhinal circuitNature 543:719–722
- [77]Human entorhinal cortex represents visual space using a boundary-anchored gridNature neuroscience 21:191–194
- [78]Hexadirectional coding of visual space in human entorhinal cortexNature neuroscience 21:188–190
- [79]Entorhinal cortex receptive fields are modulated by spatial attention, even without movementElife 7
- [80]Self-supervised learning of representations for space generates multi-modular grid cellsAdvances in Neural Information Processing Systems 36
- [81]Fully integrated silicon probes for high-density recording of neural activityNature 551:232–236
- [82]Neuropixels 2.0: A miniaturized high-density probe for stable, long-term brain recordingsScience 372
- [83]Umap: Uniform manifold approximation and projection for dimension reductionarXiv
- [84]Remembered reward locations restructure entorhinal spatial mapsScience 363:1447–1452
Copyright
© 2024, Redman et al.
This article is distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use and redistribution provided that the original author and source are credited.