Subspace generalisation across environments in grid and place cells in data from Chen et al. 2018.

a. The cumulative variance explained by the eigenvectors (EVs) calculated using the activity of the grid (black) or place (green) cells, within (solid lines) and across (dotted lines) environments. Subspace generalisation is calculated as the difference between the areas under the curves (AUCs) of the two lines. The difference between the black lines is small, indicating generalisation of grid cells across environments. The difference between the green lines is larger, indicating remapping of place cells (p < 0.001, permutation test, see Methods).
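The measure described in this panel can be sketched in a few lines. This is an illustrative implementation, not the paper's code: the activity matrices are random placeholders, and the cell counts and normalisation details are assumptions.

```python
import numpy as np

def subspace_generalisation_auc(data_fit, data_project):
    """AUC of the cumulative variance of `data_project` (cells x time)
    explained by the eigenvectors of the covariance of `data_fit`."""
    cov_fit = np.cov(data_fit)                      # cells x cells covariance
    _, evs = np.linalg.eigh(cov_fit)                # columns are eigenvectors
    var_along_evs = np.var(evs.T @ data_project, axis=1)
    cum_frac = np.cumsum(np.sort(var_along_evs)[::-1]) / var_along_evs.sum()
    return cum_frac.mean()                          # AUC on a unit x-axis

# Hypothetical activity: 20 cells, 500 time bins per environment
rng = np.random.default_rng(0)
env1 = rng.standard_normal((20, 500))
env2 = rng.standard_normal((20, 500))

auc_within = subspace_generalisation_auc(env1, env1)
auc_across = subspace_generalisation_auc(env1, env2)
diff = auc_within - auc_across    # small difference = good generalisation
```

The within-environment AUC is always at least as large as the across-environment AUC, because a dataset's own eigenvectors give the variance-maximising ordering; generalisation shows up as the across curve staying close to the within curve.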

b. The difference between the within- and across-environment AUCs (solid and dashed lines in a., respectively) of the cumulative variance explained by grid or place cells (black or green lines in a., respectively). Data are shown for all mice with enough grid or place cells (>10 recorded cells of the same type); each bar is one mouse and one projection direction (i.e. projecting onto environment one or two). The AUC differences for grid cells are significantly smaller than those for place cells (p < 0.001, permutation test; see supplementary for further statistical analyses and specific examples).
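The exact permutation scheme is described in the Methods; the following is only a generic sketch of such a test, in which group labels (grid vs place) are shuffled and the group-difference statistic is recomputed to build a null distribution. The per-animal AUC differences here are made-up numbers.

```python
import numpy as np

def permutation_p(stat_fn, x, y, n_perm=1000, seed=0):
    """Permutation test: pool the two groups, shuffle, and recompute the
    statistic to build a null distribution for the observed value."""
    rng = np.random.default_rng(seed)
    observed = stat_fn(x, y)
    pooled = np.concatenate([x, y])
    null = np.empty(n_perm)
    for i in range(n_perm):
        perm = rng.permutation(pooled)
        null[i] = stat_fn(perm[:len(x)], perm[len(x):])
    # add-one correction so the p-value is never exactly zero
    return (np.sum(null >= observed) + 1) / (n_perm + 1)

# Hypothetical per-animal AUC differences (within - across)
grid_diffs  = np.array([0.01, 0.02, 0.015, 0.030, 0.020])
place_diffs = np.array([0.09, 0.12, 0.080, 0.110, 0.100])
p = permutation_p(lambda a, b: b.mean() - a.mean(), grid_diffs, place_diffs)
```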

c. An example of the cumulative variance explained by the eigenvectors, calculated using the constructed low-resolution version of the grid and place cell data. The solid and dotted lines are averages over 10 samples, and the shaded areas represent the standard error of the mean across samples. As above, solid lines are projections within an environment and dotted lines are projections between environments.

d. Subspace generalisation in the low-resolution version of the data captures the same generalisation properties of grid vs place cells. The distributions were created by bootstrapping over cells from the same animal, averaging their activity, concatenating the samples across all animals and calculating the AUC difference between within- and across-environment projections (p << 0.001, Kolmogorov–Smirnov test).
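The bootstrap-and-compare step might be sketched as follows. This is a simplification that assumes per-cell AUC differences are already computed (the procedure in the caption averages bootstrapped activity before computing AUCs); the KS statistic is implemented by hand here, though scipy.stats.ks_2samp would give it together with a p-value. All numbers are made up.

```python
import numpy as np

rng = np.random.default_rng(1)

def bootstrap_means(values, n_boot=1000):
    """Bootstrap distribution of the mean: resample with replacement."""
    n = len(values)
    return np.array([rng.choice(values, size=n).mean() for _ in range(n_boot)])

def ks_stat(a, b):
    """Two-sample Kolmogorov-Smirnov statistic (max CDF distance)."""
    grid = np.sort(np.concatenate([a, b]))
    cdf_a = np.searchsorted(np.sort(a), grid, side="right") / len(a)
    cdf_b = np.searchsorted(np.sort(b), grid, side="right") / len(b)
    return np.abs(cdf_a - cdf_b).max()

# Made-up per-cell AUC differences (within - across) for the two cell types
grid_cells  = rng.normal(0.02, 0.01, size=40)   # small diff: generalisation
place_cells = rng.normal(0.10, 0.03, size=40)   # large diff: remapping
D = ks_stat(bootstrap_means(grid_cells), bootstrap_means(place_cells))
```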

Simulated voxels from simulated grid modules.

a. Examples of simulated voxel activity maps in the two environments, without noise. Upper: higher-frequency module; lower: lower-frequency module. Cells are grouped into voxels randomly.

b. Same as a., but with cells grouped into voxels according to their grid phase. Note the different scales of the color bars in a. and b.
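A minimal sketch of such a simulation, assuming an idealised grid-cell model (a sum of three plane waves at 60° offsets); the cell counts, grid scale and phase jitter are arbitrary choices, not the paper's parameters. Averaging cells with matched phases preserves the periodic pattern, while averaging random phases cancels it, which is why the color-bar scales in a. and b. differ.

```python
import numpy as np

def grid_map(offset, scale=0.3, size=30):
    """Idealised grid-cell firing map: sum of three plane waves at 60 deg."""
    xs, ys = np.meshgrid(np.arange(size), np.arange(size), indexing="ij")
    pos = np.stack([xs, ys], axis=-1) / size - offset   # shift by grid phase
    rate = np.zeros((size, size))
    for theta in (0.0, np.pi / 3, 2 * np.pi / 3):
        k = (2 * np.pi / scale) * np.array([np.cos(theta), np.sin(theta)])
        rate += np.cos(pos @ k)
    return rate

rng = np.random.default_rng(2)
base_phases = rng.random((16, 2))                       # one phase per voxel
# 4 cells per voxel, each jittered around its voxel's phase
offsets = np.repeat(base_phases, 4, axis=0) + 0.02 * rng.standard_normal((64, 2))
cells = np.array([grid_map(o) for o in offsets])        # (64, 30, 30)

vox_phase  = cells.reshape(16, 4, 30, 30).mean(axis=1)  # phase-grouped voxels
vox_random = cells[rng.permutation(64)].reshape(16, 4, 30, 30).mean(axis=1)
```

With this setup the phase-grouped voxel maps retain much larger amplitude than the randomly grouped ones, mirroring the color-bar difference between panels a. and b.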

c. Subspace generalization plot for the 16 simulated voxels, where the grouping into voxels is either random (left) or according to phase (right). Legend as in d.

d. Left: AUCs of the subspace generalisation plots in c. as a function of the ratio of random vs phase-organised cells in the voxels, with no noise (black) or with high-amplitude noise (blue). Without noise (black lines), the subspace generalisation measure (AUC) remains high even when the fraction of randomly sampled cells increases. In the presence of noise, however, the measure decreases with the fraction of randomly sampled cells. Right: p-value of the effect according to the permutation distribution (see Methods; shaded area: standard error of the mean). In the presence of noise, when cells are sampled randomly, AUC_within - AUC_between becomes non-significant; see supplementary Figure S3 for the dependency of the permutation distributions on the presence of noise and on the sampling.

e. Same as d., except that the continuous x-axis variable is the noise amplitude, for either phase-organised (black) or randomly organised (red) voxels. The AUC decreases sharply with noise amplitude when cells are sampled randomly, and more slowly when they are sampled according to phase. The decrease of the AUC to chance level (AUC = 0.5) with increasing noise amplitude results in an insignificant difference in the subspace generalisation measure (AUC_within - AUC_between). See supplementary Figure S3 for the permutation distributions.

Experimental design and behavior. A.

Example of an associative graph. Participants were never exposed to this top-down view of the graph - they learned the graph by viewing a series of pairs of neighboring images, corresponding to a walk on the graph. To aid memorisation, we asked participants to internally invent stories that connect the images. B. Each participant learned 4 graphs: two with a hexagonal lattice structure (both learned on days 1 and 2) and two with a community structure (both learned on days 3 and 4). For each structural form, there was one larger graph and one smaller graph. The nodes of graphs of approximately the same size were drawn from the same set of images. C-F. On each day of training, we used four tests to probe knowledge of the graphs, as well as to promote further learning. In all tests, participants performed above chance level on all days and improved their performance between the first and second days of learning a graph. C. Participants were asked whether an image X can appear between images Y and Z (one sided t-test against chance level (50%): hex day1 t(27) = 31.2, p < 10^-22 ; hex day2 t(27) = 35.5, p < 10^-23 ; comm day3 t(27) = 26.9, p < 10^-20 ; comm day4 t(27) = 34.2, p < 10^-23 ; paired one sided t-test between first and second day for each structural form: hex t(27) = 4.78, p < 10^-5 ; comm t(27) = 3.49, p < 10^-3). D. Participants were shown two three-image sequences, and were asked whether a target image could be the fourth image of the first, the second, or both sequences (one sided t-test against chance level (33.33%): hex day1 t(27) = 39.9, p < 10^-25 ; hex day2 t(27) = 42.3, p < 10^-25 ; comm day3 t(27) = 44.8, p < 10^-26 ; comm day4 t(27) = 44.2, p < 10^-26 ; paired one sided t-test between first and second day for each structural form: hex t(27) = 3.97, p < 10^-3 ; comm t(27) = 2.81, p < 10^-2). E. 
Participants were asked whether an image X is closer to image Y or to image Z, where Y and Z are not neighbors of X on the graph (one sided t-test against chance level (50%): hex day1 t(27) = 12.6, p < 10^-12 ; hex day2 t(27) = 12.5, p < 10^-12 ; comm day3 t(27) = 5.06, p < 10^-4 ; comm day4 t(27) = 7.42, p < 10^-7 ; paired one sided t-test between first and second day for each structural form: hex t(27) = 3.44, p < 10^-3 ; comm t(27) = 2.88, p < 10^-2). F. Participants were asked to navigate from a start image X to a target image Y. In each step, the participant had to choose between two (randomly selected) neighbors of the current image. The participant repeatedly made these choices until they arrived at the target image (paired one sided t-test between the number of steps taken to reach the target on the first and second day for each structural form. Left: trials with an initial distance of 2 edges between start and target images: hex t(27) = 2.57, p < 10^-2 ; comm t(27) = 2.41, p < 10^-2; Middle: initial distance of 3 edges: hex t(27) = 2.58, p < 10^-2 ; comm t(27) = 4.67, p < 10^-2; Right: trials with an initial distance of 4 edges: hex t(27) = 3.02, p < 10^-2 ; comm t(27) = 3.69, p < 10^-3). Note that while feedback was given for the local tests in panels C and D, no feedback was given for the tests in panels E-F, to ensure that participants were not directly exposed to any non-local relations. The locations of the different options on the screen were randomised in all tests. Hex: hexagonal lattice graphs. Comm: community structure graphs.

fMRI experiment and analysis method (subspace generalisation)

a. Each fMRI block starts with 70 s of random walk on the graph: a pair of pictures appears on the screen; each time the participant presses enter, a new picture appears and the previous picture appears behind it (similar to the three-picture sequences, see below). During this phase, participants are instructed to infer which “pictures set” (i.e. graph) they are currently playing with. Note that fMRI data from this phase of the task are not included in the current manuscript.

b. The three-picture sequence: three pictures appear one after the other, while the previous picture(s) still appear on the screen.

c. Each block starts with the random walk (panel a). Following the random walk, sequences of three pictures appear on the screen. Every few sequences there was a catch trial, in which we asked participants to determine whether a queried picture could appear next in the sequence.

d. Subspace generalisation method on fMRI voxels. Each searchlight extracts a [betas × voxels] coefficient matrix (betas of the 3-image sequences) for each graph in each run (therefore, there are four such matrices per graph). Then, using cross-validation across runs, the matrix of the left-out run of one graph is projected onto the EVs calculated from the (average of the other 3 runs of the) other graph. Following the projections, we calculate the cumulative percentage of variance explained and the area under this curve for each pair of graphs. This leads to a 4 × 4 subspace generalisation matrix, which is then averaged over the four runs (see main text and Methods for more details). The colors of this matrix indicate our original hypothesis for the study: that in EC, graphs with the same structure would have larger (brighter) AUCs than graphs with different structures (darker).
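The cross-validated construction of the 4 × 4 matrix might be sketched as follows. This is an illustration on random data; the PCA, centering and normalisation details are assumptions rather than the study's exact pipeline.

```python
import numpy as np

def cum_var_auc(data_fit, data_proj):
    """AUC of the cumulative variance of `data_proj` (betas x voxels)
    explained by the principal components of `data_fit`."""
    fit_c = data_fit - data_fit.mean(axis=0)
    proj_c = data_proj - data_proj.mean(axis=0)
    _, _, vt = np.linalg.svd(fit_c, full_matrices=False)   # rows: EVs
    var = np.var(proj_c @ vt.T, axis=0)                    # variance per EV
    cum = np.cumsum(np.sort(var)[::-1]) / np.var(proj_c, axis=0).sum()
    return cum.mean()

# Hypothetical searchlight data: 4 graphs x 4 runs, each (betas x voxels)
rng = np.random.default_rng(3)
n_graphs, n_runs, n_betas, n_vox = 4, 4, 12, 100
data = rng.standard_normal((n_graphs, n_runs, n_betas, n_vox))

gen = np.zeros((n_graphs, n_graphs))
for left_out in range(n_runs):                  # cross-validation over runs
    kept = [r for r in range(n_runs) if r != left_out]
    train = data[:, kept].mean(axis=1)          # average of the 3 other runs
    for i in range(n_graphs):                   # graph providing the data
        for j in range(n_graphs):               # graph providing the EVs
            gen[i, j] += cum_var_auc(train[j], data[i, left_out]) / n_runs
```

Each `gen[i, j]` entry is the run-averaged AUC for projecting graph i's left-out data onto graph j's eigenvectors, matching the matrix described above.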

Subspace generalisation in visual and structural representations.

a. Subspace generalisation of visual representations in LOC. Left: the difference between subspace generalisation computed between blocks that included the same stimuli and subspace generalisation computed between blocks with different stimuli, controlling for graph structure, i.e. [HlHl + ClCl + HsHs + CsCs] - [HlHs + HsHl + ClCs + CsCl]. t(27)_peak = 4.96, P_tfce < 0.05 over LOC. Right: visualization of the subspace generalisation matrix (averaged over all LOC voxels with t > 2 for the [HlHl + ClCl + HsHs + CsCs] - [HlHs + HsHl + ClCs + CsCl] contrast, i.e. green minus red entries).

b. EC generalises over the structure of hexagonal graphs. Left: the effect for the contrast [HlHl + HlHs + HsHl + HsHs] - [HlCl + HlCs + HsCl + HsCs], i.e. the difference between subspace generalisation of hexagonal-graph data when projected onto eigenvectors calculated from (cross-validated) hexagonal graphs (green elements in the right panel) vs community-structure graphs (red elements). t(27)_peak = 4.2, P_tfce < 0.01 over EC. Right: same as in a. right, but for the [HlHl + HlHs + HsHl + HsHs] - [HlCl + HlCs + HsCl + HsCs] contrast in EC.
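The bracketed contrasts in a. and b. are simply signed sums of entries of the subspace generalisation matrix. A small illustrative helper (the row/column ordering Hl, Hs, Cl, Cs and the toy matrix are assumptions for the example):

```python
import numpy as np

labels = ["Hl", "Hs", "Cl", "Cs"]   # assumed order: rows = data, cols = EVs
idx = {name: i for i, name in enumerate(labels)}

def contrast(gen, pos_pairs, neg_pairs):
    """Sum of positively weighted matrix entries minus negative ones."""
    pos = sum(gen[idx[a], idx[b]] for a, b in pos_pairs)
    neg = sum(gen[idx[a], idx[b]] for a, b in neg_pairs)
    return pos - neg

# Toy matrix just to demonstrate the indexing
gen = np.arange(16.0).reshape(4, 4)

# Panel b: [HlHl + HlHs + HsHl + HsHs] - [HlCl + HlCs + HsCl + HsCs]
hex_structural = contrast(
    gen,
    [("Hl", "Hl"), ("Hl", "Hs"), ("Hs", "Hl"), ("Hs", "Hs")],
    [("Hl", "Cl"), ("Hl", "Cs"), ("Hs", "Cl"), ("Hs", "Cs")],
)
```

A positive value of this contrast at a voxel indicates that hexagonal-graph data are better explained by hexagonal-graph eigenvectors than by community-graph eigenvectors.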

c. The average effect in an ROI from Baram et al. (green cluster in figure 3d of Baram et al.) for each participant. Star denotes the mean, error bars are SEM.