Abstract
Natural experience often involves a continuous series of related images while the subject is immobile. How does the cortico-hippocampal circuit process this information? The hippocampus is crucial for episodic memory1–3, but most rodent single-unit studies require spatial exploration4–6 or active engagement7. Hence, we investigated neural responses to a silent, isoluminant, black-and-white movie in head-fixed mice from the Allen Brain Observatory, without any task or locomotion demands, or rewards. The activity of most neurons (97%, 6554/6785) in the thalamo-cortical visual areas was significantly modulated by the 30s-long movie clip. Surprisingly, a third (33%, 3379/10263) of hippocampal neurons (dentate gyrus, CA1 and subiculum) showed movie-selectivity, with elevated firing in specific movie sub-segments, termed movie-fields. Movie-tuning remained intact whether mice were immobile or ran spontaneously. On average, a tuned cell had more than 5 movie-fields in visual areas, but only 2 in hippocampal areas. The movie-field durations in all brain regions spanned an unprecedented 1000-fold range, from 0.02s to 20s, termed mega-scale coding. Yet, the total duration of all the movie-fields of a cell was comparable across neurons and brain regions. We hypothesize that hippocampal responses show greater continuous-sequence encoding than visual areas, as evidenced by their fewer and broader movie-fields. Consistent with this hypothesis, repeated presentation of the movie images in a fixed, scrambled sequence virtually abolished hippocampal, but not visual-cortical, selectivity. The enhancement of continuous-movie tuning relative to the scrambled sequence was eight-fold greater in hippocampal than visual areas, further supporting episodic-sequence encoding. Thus, all mouse-brain areas investigated encoded segments of the movie. Similar results are likely to hold in primates and humans. Hence, movies could provide a unified way to probe neural mechanisms of episodic information processing and memory, even in immobile subjects, across brain regions and species.
Introduction
In addition to encoding the position and orientation of simple visual cues, like Gabor patches and drifting gratings8, primary visual cortical responses are direction selective9 and show predictive coding10, suggesting that the temporal sequence of visual cues influences neural firing. Accordingly, primary and higher visual cortical neurons also encode a sequence of visual images, i.e., a movie11–18. The hippocampus is farthest downstream from the retina in the visual circuit. Rodent hippocampal place cells encode spatial or temporal sequences2,19–26 and episode-like responses27–29. However, these responses typically require active locomotion30, and they are thought to be non-sensory31. Primate and human hippocampal responses are selective to specific sets of visual cues, e.g., object-place associations32, their short-term1 and long-term33 memories, cognitive boundaries between episodic movies34, and event integration for narrative association35. However, despite strong evidence for the role of the hippocampus in episodic memory, the hippocampal encoding of a continuous sequence of images, i.e., a visual episode, is unknown.
Results
Significant movie tuning across cortico-hippocampal areas
We used a publicly available dataset (Allen Brain Observatory – Neuropixels Visual Coding, © 2019 Allen Institute). Mice were monocularly shown a 30s clip of a continuous segment from the movie Touch of Evil (Welles, 1958)36 (Figure 1-figure supplement 1 and Figure 1-Video 1). Mice were head-fixed but free to run on a circular disk. A total of 17048 broad-spiking, active, putatively excitatory neurons were analyzed, recorded using 4-6 Neuropixels probes in 24 sessions from 24 mice (see Methods).
The majority of neurons in the visual areas (lateral geniculate nucleus, LGN; primary visual cortex, V1; higher visual areas antero-medial and postero-medial, AM-PM) were modulated by the movie, consistent with previous reports (Figure 1-figure supplement 2)11–18. Surprisingly, neurons from all parts of the hippocampus (dentate gyrus DG, CA3, CA1, subiculum SUB) were also clearly modulated (Figure 1), with reliable, elevated spiking across many trials in small movie segments. To quantify selectivity in an identical, firing rate- and threshold-independent fashion across brain regions, we computed the z-scored sparsity37–40 of neural selectivity (see Methods). Cells with z-scored sparsity >2 were considered significantly (p<0.03) modulated. Other metrics of selectivity, like depth of modulation or mutual information, provided qualitatively similar results (Figure 1-figure supplement 3). The areas V1 (97.3%) and AM-PM (97.1%) had the largest percentage of movie-tuned cells. The majority of LGN neurons (89.2%) also showed significant modulation by the movie. This level of selectivity is much higher than reported earlier11 (∼40%), perhaps because we analyzed extracellular spikes, while the previous study used calcium imaging. On the other hand, even within the calcium imaging data, movie selectivity in V1 was greater than the selectivity for classic stimuli, like drifting gratings, in agreement with reports of better model fits for primate visual responses with natural stimuli41. Direct quantitative comparison across stimuli is difficult and beyond the scope of this study because the movie frames appeared every 30ms and were preceded by similar images, while classic stimuli were presented for 250ms, in a random order. Thus, the vast majority of thalamo-cortical neurons were significantly modulated by the movie.
Movie selectivity was prevalent in the hippocampal regions too, despite head-fixation, the dissociation between self-movements and visual cues, and the absence of rewards, task, or memory demands (Figure 1a-d). Subiculum, the output region of the hippocampus, farthest removed from the retina, had the largest fraction of movie-tuned neurons (44.6%, Figure 1d), followed by the upstream CA1 (33.6%, Figure 1c) and dentate gyrus (33.1%, Figure 1a). However, CA3 movie selectivity was nearly half as prevalent (17.3%, Figure 1b). This is unlike place cells, where CA3 and CA1 selectivity are comparable42,43 and subiculum selectivity is weaker44.
Movie tuning is not an artifact of behavioral or brain state changes
To confirm these findings, we performed several controls. Running alters neural activity in visual areas45–48 and the hippocampus49–51. Hence, we used the data from only the stationary epochs (see Methods) and only from sessions with at least 300 seconds of stationary data (17 sessions, 24906 cells). Movie tuning was unchanged in this data (Figure 1-figure supplement 4). This is unlike place cells, whose spatial selectivity is greatly reduced during immobility5,6. Neurons recorded simultaneously from the same brain region also showed different selectivity patterns (Figure 1-figure supplement 5). Thus, nonspecific effects such as running cannot explain brain-wide movie selectivity. Prolonged immobility could change the brain state, e.g., through the emergence of sharp-wave ripples. Hence, we removed the data around sharp-wave ripples and confirmed that movie tuning was unaffected (Figure 1-figure supplement 6). Strongly movie-tuned cells were seen in sessions with long bouts of running as well as in sessions with predominantly immobile behavior (Figure 1-figure supplement 7), unlike responses to auditory tones, which were lost during running behavior51. Place cell selectivity of hippocampal neurons is influenced by the theta rhythm52–54. We therefore compared movie selectivity during periods of high theta versus periods of low theta; movie selectivity was significant in both cases (Figure 1-figure supplement 7). To further assess the effect of changes in brain state, we similarly analyzed movie tuning in two equal sub-segments of data, corresponding to epochs with high and low pupil dilation, a strong correlate of arousal55–57. Movie tuning was above chance levels in both sub-segments (Figure 1-figure supplement 7). Hence, locomotion, arousal or changes in brain state cannot explain the hippocampal movie tuning.
Similarities and differences between place fields and movie fields
Hippocampal neurons have one or two place fields in typical mazes, which take a few seconds to traverse58. In larger arenas that take tens of seconds to traverse, the number of peaks per cell and the peak duration increase59–62. Peak detection for movie tuning is nontrivial because neurons have nonzero background firing rates, and the elevated rates cover a wide range (Figure 1). We developed a novel algorithm to address this (see Methods). On average, V1 neurons had the largest number of movie-fields (Figure 2a, mean±s.e.m.=10.4±0.1; here we use the mean instead of the median to gain better resolution for the small, discrete values of the number of fields per cell), followed by LGN (8.6±0.3) and AM-PM (6.3±0.07). Hippocampal areas had significantly fewer movie-fields per cell: dentate gyrus (2.1±0.1), CA3 (2.8±0.3), CA1 (2.0±0.02) and subiculum (2.1±0.05). Thus, the number of movie-fields per cell was smaller than the number of place-fields per cell in comparably long spatial tracks59–64, but a handful of hippocampal cells had more than 5 movie-fields (Figure 2-figure supplement 1).
Mega-scale structure of movie-fields
Typical receptive field size increases as one moves away from the retina in the visual hierarchy36. A similar effect was seen for movie-field durations. On average, hippocampal movie-fields were longer than those in visual regions (Figure 2b). But there were many exceptions: movie-fields in LGN (median±s.e.m., here and subsequently unless stated otherwise, 308.5±33.9 ms) were twice as long as those in V1 (156.6±9.2 ms). Movie-fields in subiculum (3169.9±169.8 ms) were significantly longer than in CA1 (2786.1±77.5 ms) and nearly three-fold longer than in the upstream CA3 (979.1±241.1 ms). However, dentate gyrus movie-fields (2113.2±172.4 ms) were two-fold longer than those in the downstream CA3. This is similar to the patterns reported for CA3, CA1 and dentate gyrus place cells64, although others have claimed that CA3 place fields are slightly bigger than those in CA165, whereas movie-fields showed the opposite pattern.
The movie-field durations spanned a 500-1000-fold range in every brain region investigated (Figure 2e). This mega-scale range is unprecedentedly large, nearly two orders of magnitude greater than previous reports in place cells59,61. Even individual neurons showed 100-fold mega-scale responses (Figure 2c & d), compared to less than a 10-fold range within single place cells59,61. The mega-scale tuning within a neuron was largest in V1 and smallest in subiculum (Figure 2e). This is partly because the short-duration movie-fields in hippocampal regions were typically neither as narrow nor as prominent as in the visual areas (Figure 2-figure supplement 2).
Despite these differences in mega-scale tuning across brain areas, the total duration of elevated activity, i.e., the cumulative sum of movie-field durations within a single cell, was remarkably conserved across neurons within and across brain regions (Figure 2f). Unlike movie-field durations, which differed by more than ten-fold between hippocampal and visual regions, cumulative durations were quite comparable, ranging from 6.2s (V1) to 10.2s (CA3) (Figure 2f, LGN=8.8±0.21s, V1=6.2±0.09s, AM-PM=7.8±0.09s, DG=9.4±0.26s, CA3=10.2±0.46s, CA1=9.1±0.12s, SUB=9.5±0.27s). Thus, hippocampal movie-fields were longer and less multi-peaked than those in visual areas, such that the total duration of elevated activity was similar across all areas, spanning about a fourth of the movie, comparable to the fraction of large environments in which place cells are active61,63,64. To quantify the net activity in the movie-fields, we computed the total firing in the movie-fields (i.e., the area under the curve for the duration of the movie-fields), normalized by the expected discharge from the shuffled response. Unlike the ten-fold variation of movie-field durations, net movie-field discharge was more comparable (<3x variation) across brain areas, but was maximal in V1 and least in subiculum (Figure 2g).
Many movie-fields showed elevated activity spanning up to several seconds, suggesting rate-code-like encoding (Figure 2h). However, some cells showed movie-fields with elevated spiking restricted to less than 50ms, similar to responses to briefly flashed stimuli in anesthetized cats12,13,66. This is suggestive of a temporal code, characterized by low spike-timing jitter67. Such short-duration movie-fields were common not only in the thalamus (LGN), but also in AM-PM, three synapses away from the retina. A small fraction of cells in the hippocampal areas, more than five synapses away from the retina, showed such temporally coded fields as well (Figure 2h).
To determine the stability and temporal continuity of movie tuning across the neural ensembles, we computed the population vector overlap between even and odd trials68 (see Methods). Population response stability was significantly greater for tuned than for untuned neurons (Figure 3-figure supplement 1). The population vector overlap around the diagonal was broader in hippocampal regions than in visual cortical areas and LGN, indicating longer temporal continuity, reflective of their longer movie-fields. Further, the population vector overlap away from the diagonal was larger around frames 400-800 in all brain areas, due to the longer movie-fields in that movie segment (see below).
Relationship between movie image content and neural movie tuning
Are all movie frames represented equally by all brain areas? The duration and density of movie-fields varied as a function of the movie frame and brain region (Figure 3-figure supplement 2). We hypothesized that this variation could correspond to the change in visual content from one frame to the next. Hence, we quantified the similarity between adjacent movie frames as the correlation coefficient between corresponding pixels, termed the frame-to-frame (F2F) image correlation. For comparison, we also quantified the similarity between the neural responses to adjacent frames (F2F neural correlation), as the correlation coefficient between the firing rate responses of neuronal ensembles to adjacent frames. For all brain regions, the neural F2F correlation was correlated with the image F2F correlation, but this correlation was weaker in hippocampal output regions (CA1 and SUB) than in visual regions like LGN and V1. The majority of brain regions had a substantially reduced density of movie-fields between movie frames 400 and 800, but the movie-fields were longer in this segment. This effect, too, was greater in the visual regions than in hippocampal regions. Using significantly tuned neurons, we computed the average neural activity in each brain region at each point in the movie (see Methods). Although movie-fields (Figure 3a), or just the strongest movie-field per cell (Figure 3b), covered the entire movie, the peak-normalized ensemble activity of all brain regions showed significant overrepresentation, i.e., deviation from uniformity, in certain parts of the movie (Figure 3c, see Methods). This was most pronounced in V1 and the higher visual areas AM-PM. The number of movie frames with elevated ensemble activity was higher in visual cortical areas than in hippocampal regions (Figure 3d), and this modulation (see Methods) was smaller in the hippocampus and LGN than in the visual cortical regions (Figure 3e).
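As an illustration, the two F2F measures can be computed as in the following MATLAB sketch. The input formats (a frames × pixels stack and an ensemble rate matrix) and variable names are assumptions for illustration, not the analysis code itself.

% frames:  900 x H x W array of grayscale movie frames (assumed input).
% rates:   nCells x 900 matrix of trial-averaged, frame-wise firing rates
%          of the tuned neurons of one brain region (assumed input).
f2fImage  = zeros(1, 899);
f2fNeural = zeros(1, 899);
for n = 1:899
    a = double(reshape(frames(n, :, :), [], 1));
    b = double(reshape(frames(n + 1, :, :), [], 1));
    c = corrcoef(a, b);
    f2fImage(n) = c(1, 2);                                % pixel-wise similarity of adjacent frames
    c = corrcoef(rates(:, n), rates(:, n + 1));
    f2fNeural(n) = c(1, 2);                               % ensemble-response similarity of adjacent frames
end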
Using the significantly tuned neurons, we also computed the average neural activity in each brain region corresponding to each frame in the movie, without peak rate normalization (see Methods). The degree of continuity between movie frames, quantified as above (F2F image correlation), was inversely correlated with the ensemble rate modulation in all areas except DG, CA3 and CA1 (Figure 3f and g). As expected for a continuous movie, this F2F image correlation was close to unity for most frames, and was highest in the latter part of the movie, where the images changed more slowly. The population-wide elevated firing rates, as well as the smallest movie-fields, occurred during the earlier parts (Figure 3-figure supplement 2). Thus, the movie code was stronger in the segments with the greatest change across movie frames, in agreement with recent reports of visual cortical encoding of flow stimuli69. These results show differential population representation of the movie across brain regions.
Differential neural encoding of sequential versus scrambled movie in visual and hippocampal areas
If these responses were purely visual, a movie made of a scrambled sequence of images should generate equally strong, or even stronger, selectivity due to the even larger change across movie frames, despite the absence of similarity between adjacent frames. To explore this possibility, we investigated neural selectivity when the same movie frames were presented in a fixed but scrambled sequence (scrambled movie, Figure 4-Video 1). The within-frame and total visual content was identical between the continuous and scrambled movies, and the same sequence of images was repeated many times in both experiments (see Methods). But the latter lacked correlation between adjacent frames, i.e., visual continuity (Figure 4a).
For all brain regions investigated, the continuous movie generated significantly greater modulation of neural activity than the scrambled sequence (Figure 4b). The middle 20 trials of the continuous movie were chosen as the appropriate subset for comparison, since they were chronologically closest to the scrambled movie presentation. This choice ensured that other long-term effects, such as behavioral state change, instability of single-unit measurement and representational70 or behavioral71 drift, could not account for the differences in neural responses to continuous and scrambled movie presentation. The preference for the continuous over the scrambled movie was greatest in hippocampal regions, where the percentage of significantly tuned neurons dropped more than four-fold, from 17.8% for the continuous movie (after accounting for the smaller number of trials, see Methods) to 4.4% for the scrambled sequence, near the chance level of 2.3%. This was unlike visual areas, where the scrambled (80.4%) and the continuous movie (92.4%) generated similar prevalence levels of selectivity (Figure 4b). The few hippocampal cells that were significantly selective to the scrambled sequence did not have long-duration responses, but only very short, ∼50ms-long responses (Figure 4d), reminiscent of, but even sharper than, human hippocampal responses to flashed images33. To estimate the effect of the continuous movie compared to the scrambled sequence on individual cells, we computed the normalized difference between the continuous and scrambled movie selectivity for cells that were selective in either condition (Figure 4c, see Methods). This visual continuity index was more than eight-fold higher in hippocampal areas (median across all 4 hippocampal regions = 87.8%) than in the visual areas (median across visual regions = 10.6%).
The pattern of increasing visual continuity index as we moved up the visual hierarchy, largely paralleled the anatomic organization72, with the greatest sensitivity to visual continuity in the hippocampal output regions, CA1 and subiculum, but there were notable exceptions. The primary visual cortical neurons showed the least reduction in selectivity due to the loss of temporally contiguous content, whereas LGN neurons, the primary source of input to the visual cortex and closer to the periphery, showed far greater sensitivity (Figure 4c).
Many visual cortical neurons were significantly modulated by the scrambled sequence, but their number of movie-fields per cell was greater, and the movie-field duration shorter, than during the continuous movie (Figure 4-figure supplement 1&2). This could occur due to the loss of frame-to-frame correlation in the scrambled sequence. The average activity of the neural population in V1 and AM-PM showed significant deviation even with the scrambled movie, comparable to the continuous movie, but this multi-unit ensemble response was uncorrelated with the frame-to-frame correlation of the scrambled sequence (Figure 4-figure supplement 3). A substantial fraction of visual cortical and LGN responses to the scrambled sequence could be rearranged to resemble the continuous movie responses (Figure 4-figure supplement 4, see Methods). The latency needed to shift the responses was smallest in LGN and largest in AM-PM, as expected from the feed-forward anatomy of visual information processing36,72 (Figure 4-figure supplement 4). Unlike in visual areas, such rearrangement did not recover the continuous movie responses in the hippocampal regions (example cells in Figure 4e; also see Figure 4-figure supplement 4 for statistics and details). Further, even after rearranging the hippocampal responses, their selectivity to the scrambled movie presentation remained near chance levels (Figure 4-figure supplement 5).
Population vector decoding of an ensemble of a few hundred place cells is sufficient to decode the rat’s position73, as well as the position of a passively moving object40. Using similar methods, we decoded the movie frame number (see Methods). Continuous movie decoding was better than chance in all brain regions analyzed (Figure 4f). Upon accounting for the number of tuned neurons from different brain regions, decoding was most accurate in V1 and least accurate in dentate gyrus. Scrambled movie decoding was significantly weaker, yet above chance level (based on shuffles, see Methods), in visual areas, but not in CA3 and dentate gyrus. However, CA1 and subiculum neuronal ensembles could be used to decode the scrambled movie frame number slightly above chance levels (Figure 4g). Similarly, the population vector overlap between even and odd trials of the scrambled sequence was strong in visual areas and weaker in hippocampal regions, but still significantly greater than that of untuned neurons in hippocampal regions (Figure 4-figure supplement 6). Combined with the handful of hippocampal neurons whose movie selectivity persisted during the scrambled presentation, this suggests that the loss of correlations between adjacent frames in the scrambled sequence abolishes most, but not all, of the hippocampal selectivity to visual sequences.
Discussion
Movie tuning in the visual areas
To understand how neurons encode a continuously unfolding visual episode, we investigated neural responses in the head-fixed mouse brain to an isoluminant, black-and-white, silent human movie, without any task demands or rewards. As expected, neural activity showed significant modulation in all thalamo-cortical visual areas, with elevated activity in response to specific parts of the movie, termed movie-fields. Most thalamo-cortical neurons (96.6%, 6554/6785) showed significant movie tuning. This is nearly double that reported for classic stimuli such as Gabor patches in the same dataset36, although a direct comparison is difficult due to differences in experimental and analysis methods. For example, the classic stimuli were presented for 250ms, preceded by a blank background, whereas the images changed every 30ms in the movie. On the other hand, significant tuning of the vast majority of visual neurons to movies is consistent with other reports11–13,15,17,66,69–71. Thus, movies are a reliable method to probe the function of the visual brain and its role in cognition.
Movie tuning in hippocampal areas
Remarkably, a third of hippocampal neurons (32.9%, 3379/10263) were also movie-tuned, comparable to the fraction of neurons with significant spatial selectivity in mice74 and bats75, and far greater than the fraction of significant place cells in the primate hippocampus76–78. While the hippocampus is implicated in episodic memory, rodent hippocampal responses are largely studied in the context of spatial maps or place cells, and more recently in other tasks that require active locomotion or active engagement7,79. However, unlike place cells5,6, movie-tuning remained intact during immobility in all brain areas studied, which could be because self-motion causes consistent changes in multisensory cues during spatial exploration but not during movie presentation. This dissociation of the effect of mobility on spatial and movie selectivity agrees with recent reports of dissociated mechanisms of episodic encoding and spatial navigation in human amnesia80. Our results are broadly consistent with prior studies that found movie selectivity in human hippocampal single neurons81. However, that study relied on famous, very familiar movie clips, similar to the highly familiar image selectivity33, to probe episodic memory recall. In contrast, mice in our study had seen this black-and-white human movie clip only in two prior habituation sessions, and it is very unlikely that they understood the episodic content of the movie. Recent studies found human hippocampal activation in response to abrupt changes between different movie clips34,82,83, which is broadly consistent with our findings. Future studies can investigate the nature of hippocampal activation in mice in response to familiar movies to probe episodic memory and recall. These observations support the hypothesis that specific visual cues can create reliable representations in all parts of the hippocampus in rodents5,37,40, nonhuman primates76,78 and humans84,85, unlike spatial selectivity, which requires consistent information from multisensory cues28,38,86.
Mega-scale nature of movie-fields
Across all brain regions, neurons showed mega-scale encoding, with movie-fields varying in duration by up to 1000-fold, similar to, but far greater than, recent reports of 10-fold multi-scale responses in the hippocampus59–64,87. While neural selectivity to movies has been studied in visual areas, such mega-scale coding has not been reported. Remarkably, mega-scale movie coding was found not only across the population: even individual LGN and V1 neurons could show two different movie-fields, one lasting less than 100ms and another exceeding 10,000ms. The speed at which visual content changed across movie frames could explain a part, but not all, of this effect. The mechanisms governing mega-scale encoding will require additional studies. For example, the average duration of the movie-fields increased along the feed-forward hierarchy, consistent with the hierarchy of response lags during language processing88. Paradoxically, mega-scale coding meant that the opposite pattern also existed, with 10s-long movie-fields in some LGN cells and movie-fields shorter than 100ms in subiculum.
Continuous versus scrambled movie responses
The analysis of the scrambled movie sequence allowed us to compute the neural response latency to movie frames. This was longest in AM-PM (91ms), shorter in V1 (74ms) and shortest in LGN (60ms), thus following the visual hierarchy. The pattern of movie tuning properties was also broadly consistent between V1 and AM-PM (Fig 2). However, several aspects of movie-tuning did not follow the feed-forward anatomical hierarchy. For example, all metrics of movie selectivity for the continuous movie (Fig 2) showed a consistent pattern that was inconsistent with the feed-forward anatomical hierarchy: V1 had stronger movie tuning, a higher number of movie-fields per cell, narrower movie-field widths, larger mega-scale structure, and better decoding than LGN. V1 was also more robust to the scrambled sequence than LGN. One possible explanation is that there are other sources of input to V1, beyond LGN, that contribute significantly to movie tuning89. Amongst the hippocampal regions, the tuning properties of CA3 neurons (field durations, mega-chronicity index, visual continuity index and several measures of population modulation) were closest to those of the visual regions, even though the prevalence of tuning in CA3 was lower than in the other hippocampal and visual areas.
Emergence of episode-like movie code in hippocampus
The temporal integration window90–92 as well as the intrinsic timescale of firing36 increase along the anatomical hierarchy in the cortex, with the hippocampus being farthest removed from the retina72. This hierarchical anatomical organization, with visual areas being upstream of the hippocampus, could explain the longer movie-fields, the strength of tuning, the number of movie peaks, their widths and the decoding accuracy in hippocampal regions. This could also explain the several-fold greater preference for the continuous movie over the scrambled sequence in the hippocampus compared to the upstream visual areas. But, unlike reports of image-association memory in the inferior temporal cortex for unrelated images93,94, only a handful of hippocampal neurons showed selective responses to the scrambled sequence. These results, along with the longer duration of hippocampal movie-fields, suggest that hippocampal responses could mediate visual chunking, or binding of a sequence of events. In fact, evidence for episodic-like chunking of visual information was found in all visual areas as well, where the scrambled sequence not only reduced neural selectivity but also caused fragmentation of movie-fields (Figure 4-figure supplement 4).
No evidence of nonspecific effects
Could the brain-wide mega-scale tuning be an artifact of poor unit isolation, e.g., due to an erroneous mixing of two neurons, one with very short and another with very long movie-fields? This is unlikely, since LGN and visual cortical neural selectivity to classic stimuli (Gabor patches, drifting gratings etc.) in the same dataset was similar to that reported in most studies36, whereas poor unit isolation should reduce these selective responses. However, to directly test this possibility, we calculated the correlation between the unit isolation index (or the fraction of refractory violations) and the mega-scale index of the cell, while factoring out the contribution of the mean firing rate (Figure 1-figure supplement 8). This correlation was not significant (p>0.05) for any brain area.
Movie-fields vs. place-fields
Do the movie-fields arise from the same mechanism as place fields? Studies have shown that when rodents are passively moved along a linear track that they had previously explored6, or when images of the environment around a linear track were played back to them5, some hippocampal neurons generated spatially selective activity. Since the movie clip involved changes of spatial view, one could hypothesize that the movie-fields are just place fields generated by passive viewing. This is unlikely for several reasons. Mega-scale movie-fields were found in the vast majority of neurons in all visual areas, including LGN, far exceeding the prevalence of spatially modulated neurons in the visual cortex during virtual navigation95,96. Further, in prior passive-viewing experiments, the rodents were shown the same narrow linear track, like a tunnel, that they had previously explored actively to obtain food rewards at specific places. In contrast, in the current experiments, the mice had never actively explored the space shown in the movie, nor obtained any rewards. Active exploration of a maze, combined with spatially localized rewards, engages multisensory mechanisms resulting in increased place cell activation22,28,97, all of which are missing in these experiments during passive viewing of a movie, presented monocularly, without any other multisensory stimuli and without any rewards. Compared to spontaneous activity, about half of CA1 and CA3 neurons shut down during spatial exploration, and this shutdown is even greater in the dentate gyrus. Further, compared to the exploration of a real-world maze, exploration of a visually identical virtual world causes a 60% reduction in CA1 place cell activation86. In contrast, there was no evidence of neural shutdown during the movie presentation compared to grey-screen spontaneous epochs (Figure 1-figure supplement 8). Similarly, the number of place fields per CA1 cell on a long track is positively correlated with the mean firing rate of the cell62, which was not seen here for CA1 movie-fields.
A recent study showed that CA1 neurons encode the distance, angle, and movement direction of a vertical bar of light40, consistent with the position of the hippocampus in the visual circuitry72. Do those findings predict the movie tuning reported herein? There are indeed some similarities between the two experimental protocols: purely passive optical motion without any self-motion or rewards. However, there are significant differences too. Similar to place cells in the real and virtual worlds38, all the cells tuned to the moving bar of light had single receptive fields with elevated responses lasting a few seconds; there were neither punctate responses nor even a 10-fold variation in neural field durations, let alone the 1000-fold change reported here. Finally, those results were reported only in area CA1, while the results presented here cover nearly all the major stations of the visual hierarchy.
Notably, hippocampal neurons did not encode Gabor patches or drifting gratings in the same dataset, indicating the importance of temporally continuous sequences of images for hippocampal activation36. This is consistent with the hypothesis that the hippocampus is involved in coding spatial sequences21,29,98. However, unlike place cells, which degrade in immobile rats, hippocampal movie tuning was unchanged in the immobile mouse. Further, the scrambled sequence too was presented in the same order many times, yet movie tuning dropped to chance levels in the hippocampal areas. Unlike in visual areas, the scrambled-sequence responses of hippocampal neurons could not be rearranged to recover the continuous-movie response. This shows the importance of continuous, episodic content, rather than mere sequential recurrence of unrelated content, for rodent hippocampal activation. We hypothesize that, similar to place cells, movie-field responses obtained without task demands play a role, yet to be determined, in episodic memory. Further work involving a behavioral report of the episodic content could differentiate between the sequence coding described here and the contribution of episodically meaningful content. However, the nature of movie selectivity tested so far in humans was different (recall of famous, short movie clips81, or responses at event boundaries34) from that in rodents here (a human movie, with selectivity to specific movie segments).
Broader outlook
Our findings open up the possibility of studying thalamic, cortical, and hippocampal brain regions in a simple, passive, and purely visual experimental paradigm, and of extending comparable convolutional neural networks11 to include the hippocampus at the apex72. Further, our results bridge the long-standing gap between rodent and human hippocampal studies34,99–101, where natural movies can be decoded from fMRI signals in immobile humans102. This brain-wide mega-scale encoding of a human movie episode, and the enhanced preference for visual continuity in the hippocampus compared to visual areas, supports the hypothesis that the rodent hippocampus is involved in non-spatial episodic memories, consistent with classic findings in humans1 and in agreement with a more generalized, representational framework103,104 of episodic memory in which the hippocampus encodes temporal patterns. Similar responses are likely across different species, including primates. Thus, movie-coding can provide a unified platform to investigate the neural mechanisms of episodic coding, learning and memory.
Methods
Experiments
We used the Allen Brain Observatory – Neuropixels Visual Coding dataset (© 2019 Allen Institute, https://portal.brain-map.org/explore/circuits/visual-coding-neuropixels). This website and the related publication36 contain the detailed experimental protocol, neural recording techniques, spike sorting etc. Data from 24 mice (16 males; n=13 C57BL/6J wild-type, n=2 Pvalb-IRES-Cre×Ai32, n=6 Sst-IRES-Cre×Ai32, and n=3 Vip-IRES-Cre×Ai32) from the “Functional connectivity” dataset were analyzed herein. Prior to implantation with Neuropixels probes, mice passively viewed the entire range of images, including the drifting gratings, Gabor patches and movies of interest here. Videos of body and eye movements were obtained at 30Hz and synced to the neural data and stimulus presentation using a photodiode. Movies were presented monocularly on an LCD monitor with a refresh rate of 60Hz, positioned 15cm away from the mouse’s right eye, and spanned 120°x95°. 30 trials of the continuous movie presentation were followed by 10 trials of the scrambled movie. Next was a presentation of drifting gratings, followed by a quiet period of 30 minutes where the screen was blank. Then a second block of drifting gratings, the scrambled movie and the continuous movie was presented. After surgery, all mice were single-housed and maintained on a reverse 12-h light cycle in a shared facility with room temperatures between 20 and 22 °C and humidity between 30 and 70%. All experiments were performed during the dark cycle.
Neural spiking data were sampled at 30 kHz with a 500Hz high-pass filter. Spike sorting was automated using Kilosort2105. The output of Kilosort2 was post-processed to remove noise units, characterized by unphysiological waveforms. Neuropixels probes were registered to a common coordinate framework106. Each recorded unit was assigned to the recording channel with the maximum spike amplitude, and thereby to the corresponding brain region. Broad-spiking units, identified as those with an average spike waveform duration (peak to trough) between 0.45 and 1.5ms and a mean firing rate above 0.5Hz, were analyzed throughout, except in Figure 1-figure supplement 8.
Movie tuning quantification
The movie consisted of 900 frames: 30s total, 30Hz refresh rate, 33.3ms per frame. At the first level of analysis, spike data were split into 900 bins, each 33.3ms wide (the bin size was later varied systematically to detect mega-scale tuning, see below). The resulting tuning curves were smoothed with a Gaussian window of σ=66.6ms, or 2 frames. The degree of modulation and its significance were estimated by the sparsity s, as previously described40,86:

s = 1 − (Σ_n r_n / N)² / (Σ_n r_n² / N)

where r_n is the firing rate in the n-th frame or bin and N=900 is the total number of bins. This is equivalent to “lifetime sparseness”, used previously11,14, except for the normalization factor of (1−1/N), which is close to unity when N is close to 900, as in the case of movies. Statistical significance of sparsity was computed using a bootstrapping procedure, which does not assume a normal distribution. Briefly, for each cell, the spike train as a function of the frame number was circularly shifted, by a different amount on each trial, and the sparsity of the randomized data was computed. This procedure was repeated 100 times with different amounts of random shift. The mean and standard deviation of the sparsity of the randomized data were used to compute the z-scored sparsity of the observed data, using the function zscore in MATLAB. The observed sparsity was considered statistically significant if the z-scored sparsity of the observed spike train was greater than 2, which corresponds to p<0.023 (one-tailed). A similar method was used to quantify the significance of scrambled movie tuning, as well as for the subset of data with only stationary epochs, or its equivalent subsample (see below). The middle 20 trials of the continuous movie were used in comparisons with the scrambled movie in Figure 4, to ensure a fair comparison by using the same number of trials, with similar time delays across measurements.
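The procedure above can be summarized in a brief MATLAB sketch. This is a minimal illustration under an assumed input format (a trials × frames matrix of spike counts), not the analysis code itself; the helper names are hypothetical.

% spikeCounts: nTrials x 900 matrix of spike counts for one cell, one row per
% trial and one column per 33.3 ms movie frame (assumed input format).
function zSparsity = movieSparsityZ(spikeCounts)
    nShuffles = 100;
    sObs = sparsityIndex(tuningCurve(spikeCounts));
    sShuf = zeros(nShuffles, 1);
    for k = 1:nShuffles
        shifted = spikeCounts;
        for t = 1:size(spikeCounts, 1)                 % shift each trial by a different random amount
            shifted(t, :) = circshift(spikeCounts(t, :), randi(size(spikeCounts, 2)));
        end
        sShuf(k) = sparsityIndex(tuningCurve(shifted));
    end
    zSparsity = (sObs - mean(sShuf)) / std(sShuf);     % z-score of observed sparsity vs. shuffles
end

function r = tuningCurve(counts)
    r = mean(counts, 1) / 0.0333;                      % trial-averaged firing rate (Hz) per frame
    g = exp(-0.5 * ((-6:6) / 2).^2);  g = g / sum(g);  % Gaussian kernel, sigma = 2 frames (66.6 ms)
    r = conv([r r r], g, 'same');                      % wrap circularly before smoothing
    r = r(901:1800);
end

function s = sparsityIndex(r)
    N = numel(r);
    s = 1 - (sum(r) / N)^2 / (sum(r.^2) / N);          % sparsity, as defined above
end

Cells with zSparsity > 2 would then be labeled significantly movie-tuned.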
In addition to sparsity, we quantified movie tuning using two other measures.
Depth of modulation = (r_max − r_min) / (r_max + r_min), where r_max and r_min are the largest and lowest firing rates across movie frames, respectively.
Mutual information:

MI = Σ_n Σ_C p(frame_n) · p(C|frame_n) · log2[ p(C|frame_n) / p(C) ]

where p(C) = Σ_n p(frame_n) · p(C|frame_n), and C is the spike count in the 0.033 second window corresponding to one movie frame. p(frame_n) is 1/900, as all frames were presented an equal number of times. Statistical significance of these alternative measures of selectivity was computed similarly to that for sparsity and is detailed in Figure 1-figure supplement 3.
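For illustration, both alternative measures can be computed as in the following MATLAB sketch, assuming a trials × frames matrix of spike counts; the variable names are hypothetical.

% counts: nTrials x 900 matrix of per-frame spike counts for one cell (assumed input).
r = mean(counts, 1) / 0.0333;                     % frame-wise firing rate (Hz)

% Depth of modulation
dom = (max(r) - min(r)) / (max(r) + min(r));

% Mutual information (bits) between the spike count C and the movie frame
pFrame = 1 / 900;                                 % all frames presented equally often
maxC = max(counts(:));
pCgivenF = zeros(maxC + 1, 900);                  % p(C | frame_n), for counts 0..maxC
for n = 1:900
    pCgivenF(:, n) = histcounts(counts(:, n), -0.5:1:(maxC + 0.5))' / size(counts, 1);
end
pC = sum(pCgivenF * pFrame, 2);                   % p(C) = sum_n p(frame_n) p(C|frame_n)
MI = 0;
for n = 1:900
    for c = 1:(maxC + 1)
        if pCgivenF(c, n) > 0
            MI = MI + pFrame * pCgivenF(c, n) * log2(pCgivenF(c, n) / pC(c));
        end
    end
end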
Stationary epoch and sharp wave ripple free epoch identification
To eliminate the confounding effects of changes in behavioral state associated with running, we repeated our analysis in stationary epochs, defined as periods when the running speed remained below 2cm/s, both during the period itself and for at least 5 seconds before and after it. Analysis was further restricted to sessions with at least 5 total minutes of such epochs during the 60 trials of continuous movie presentation. To account for the smaller amount of data in the stationary epochs, we recomputed tuning using an equivalent random subsample of the data, regardless of running or stopping, and compared the two results for differences in selectivity.
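A minimal MATLAB sketch of this epoch selection, under the assumption of a running-speed trace sampled at a fixed rate, is given below; variable names and the sampling-rate argument are illustrative.

% speed: 1 x nSamples running-speed trace (cm/s), sampled at fs Hz (assumed inputs).
function keep = stationaryMask(speed, fs)
    slow = speed < 2;                       % samples below the 2 cm/s threshold
    pad = round(5 * fs);                    % 5 s margin required before and after each epoch
    keep = false(size(slow));
    d = diff([0, slow, 0]);
    starts = find(d == 1);
    stops = find(d == -1) - 1;
    for i = 1:numel(starts)
        % keep only the interior of slow bouts, trimmed by the 5 s margins
        if (stops(i) - starts(i) + 1) > 2 * pad
            keep(starts(i) + pad : stops(i) - pad) = true;
        end
    end
end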
Similarly, to remove epochs of sharp-wave ripples (SWRs), we first computed the band-passed power in the 150-250Hz range at the hippocampal (CA1) recording sites. SWR occurrence was noted if the power on any of the best 5 CA1 sites (those with the highest theta (5-12Hz) to delta (1-4Hz) ratio), or the median power across all CA1 sites, exceeded its respective 3-standard-deviation threshold. To remove SWRs, we removed frames within ±0.5 seconds of each SWR occurrence and recomputed movie tuning on the remaining data. Similar to the stationary-epoch calculation above, we compared this tuning to that of an equivalent random subset, to account for the loss of data.
Pupil dilation and theta power comparisons
To assess the contribution of arousal state to movie tuning, we re-calculated the z-scored sparsity in epochs with high vs. low pupil dilation. The pupil was tracked at a 30Hz sampling rate, and the height and width of the elliptical fit provided in the publicly available dataset were used. For each session, the data were split into two equal halves, corresponding to pupil area above and below the 50th percentile. The resulting z-scored sparsity is reported in Figure 1-figure supplement 7.
Similarly, the theta power, computed from the band-passed local field potential signal in the 5-12Hz range, was used to split the data into two equal sub-segments. The CA1 channel with the highest average theta to delta (1-4Hz) power ratio was used for these calculations. Movie tuning in the data with high and low theta power, thus separated, is reported in Figure 1-figure supplement 7.
Mega-scale movie-field detection in tuned neurons
For neurons with significant movie-sparsity, i.e., movie-tuned neurons, the movie response was first recalculated at a higher resolution of 3.33ms (10 times finer than the frame duration of 33.3ms). The findpeaks function in MATLAB was used to obtain peaks with prominence larger than 110% (1.1×) of the range of firing rate variation obtained by chance, as determined from a sample shuffled response.
This calculation was repeated at different smoothing values (10 logarithmically spaced Gaussian smoothing widths, with σ ranging from 6.7ms to 3430ms), to ensure that long as well as short movie-fields were reliably detected and treated equally. For frames where overlapping peaks were found at different smoothing levels, we employed a comparative algorithm to select only the peak(s) with the higher prominence score. This score was obtained as the ratio of the peak's prominence to the range of fluctuations in the correspondingly smoothed shuffle. This procedure was conducted iteratively, in increasing order of smoothing. If a broad peak overlapped with multiple narrow ones, the sum of the scores of the narrow peaks was compared with that of the broad one. To ensure that peaks at the beginning and end of the movie were reliably detected, we circularly wrapped the movie response, for the observed as well as the shuffled data.
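The multi-scale peak detection can be sketched as follows in MATLAB; the iterative resolution of overlapping peaks is only indicated in comments. The input names, the single-shuffle chance estimate, and the kernel construction are simplifying assumptions, not the analysis code itself.

% rateFine:    1 x 9000 tuning curve at 3.33 ms resolution (observed; assumed input).
% shuffleFine: 1 x 9000 tuning curve of a circularly shifted shuffle (assumed input).
sigmas = round(logspace(log10(2), log10(1030), 10));      % ~6.7 ms to ~3430 ms, in 3.33 ms bins
fields = [];                                              % [peakIndex, width, score, sigmaLevel]
for s = 1:numel(sigmas)
    x = -3 * sigmas(s) : 3 * sigmas(s);
    g = exp(-0.5 * (x / sigmas(s)).^2);  g = g / sum(g);  % Gaussian kernel at this smoothing level
    smObs  = conv([rateFine rateFine rateFine], g, 'same');        % circular wrap
    smObs  = smObs(9001:18000);
    smShuf = conv([shuffleFine shuffleFine shuffleFine], g, 'same');
    smShuf = smShuf(9001:18000);
    chanceRange = max(smShuf) - min(smShuf);              % chance range of rate fluctuation
    [~, locs, widths, proms] = findpeaks(smObs, 'MinPeakProminence', 1.1 * chanceRange);
    score = proms / chanceRange;                          % prominence relative to chance
    fields = [fields; locs(:), widths(:), score(:), s * ones(numel(locs), 1)]; %#ok<AGROW>
end
% Overlapping peaks detected at different smoothing levels would then be resolved
% iteratively, in increasing order of smoothing, keeping the peak (or the set of
% narrower peaks) with the higher summed prominence score, as described above.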
Identifying frames with significant deviations in multiple single-unit activity (MSUA)
First, the average response across tuned neurons in each brain region was computed for each movie frame, after normalizing each cell's response by its peak firing rate. This average response was used as the observed "multiple single-unit activity (MSUA)" in Figure 3. To compute chance levels, individual neuron responses were circularly shifted with respect to the movie frames, breaking the frame-to-firing-rate association while maintaining the overall firing rate modulation. 100 such shuffles were used, and for each shuffle, the shuffled MSUA response was computed by averaging across neurons. Across these 100 shuffles, the mean and standard deviation were obtained for each frame and used to compute the z-score of the observed MSUA. To obtain significance at the p=0.025 level, Bonferroni correction was applied, and the appropriate z-score level (4.04) was chosen. The numbers of frames in the observed MSUA above and below this level were further quantified in Figure 3. The firing deviation for these frames was computed as the ratio between the mean observed MSUA and the mean shuffled MSUA, reported as a percentage, for frames with z-scores greater than +4 or less than -4. To obtain a total firing rate report, in which each spike gets an equal vote, we also computed the total firing response as the summed rate across all tuned neurons (divided by the number of neurons) in Figure 3, and across all neurons in Figure 3-figure supplement 2.
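A compact MATLAB sketch of this z-scoring of ensemble activity, under an assumed cells × frames rate matrix, is shown below; norminv (Statistics Toolbox) is used only to derive the Bonferroni-corrected threshold quoted above.

% rates: nCells x 900 matrix of frame-wise firing rates of the significantly
% tuned cells of one brain region (assumed input).
nShuffles = 100;
normRates = rates ./ max(rates, [], 2);            % peak-normalize each cell
msuaObs = mean(normRates, 1);                      % observed MSUA, one value per frame

msuaShuf = zeros(nShuffles, 900);
for k = 1:nShuffles
    sh = normRates;
    for c = 1:size(normRates, 1)
        sh(c, :) = circshift(normRates(c, :), randi(900));   % break the frame-to-rate association
    end
    msuaShuf(k, :) = mean(sh, 1);
end
zMSUA = (msuaObs - mean(msuaShuf, 1)) ./ std(msuaShuf, 0, 1);

zCrit = norminv(1 - 0.025 / 900);                  % Bonferroni-corrected threshold, ~4.04
sigHigh = zMSUA > zCrit;                           % frames with significantly elevated MSUA
sigLow  = zMSUA < -zCrit;                          % frames with significantly reduced MSUA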
Population Vector Overlap
To evaluate the properties of a population of cells, movie presentations were divided into alternate trials, yielding even and odd blocks68. Population vector overlap was computed between the movie responses calculated separately for these two blocks of trials. The population vector overlap between frame x of the even trials and frame y of the odd trials was defined as the Pearson correlation coefficient between the vectors (R_1,x, R_2,x, …, R_N,x) and (R_1,y, R_2,y, …, R_N,y), where R_n,x is the mean firing rate response of the n-th neuron to the x-th movie frame, and N is the total number of neurons used for each brain region. This calculation was done for x and y ranging from 1 to 900, corresponding to the 900 movie frames. The same method was used for tuned and untuned neurons in the continuous movie responses in Figure 3-figure supplement 1, and for the scrambled sequence responses in Figure 4-figure supplement 6.
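For example, the full 900×900 overlap matrix can be obtained with a few lines of MATLAB, assuming cells × frames rate matrices computed separately from even and odd trials (variable names are illustrative):

% ratesEven, ratesOdd: N x 900 matrices of mean firing rates, computed from
% even and odd movie trials respectively (assumed inputs).
pvOverlap = zeros(900, 900);
for x = 1:900
    for y = 1:900
        c = corrcoef(ratesEven(:, x), ratesOdd(:, y));     % correlate the two population vectors
        pvOverlap(x, y) = c(1, 2);
    end
end
% Equivalently, pvOverlap = corr(ratesEven, ratesOdd); with the Statistics Toolbox.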
Decoding analysis
Methods similar to those previously described were used40,73. For tuned cells, each of the 60 trials of the continuous movie was decoded using all other trials. The mean firing rate responses across the other 59 trials, for all 900 frames, were used to compute a "look-up" matrix. Each neuron's response was normalized between 0 and 1. At each frame of the "observed" trial, the correlation coefficient was computed between the population vector response in this trial and the look-up matrix response for every frame. The frame corresponding to the maximal correlation was denoted as the decoded frame. The decoding error was computed as the average of the absolute difference between the actual and decoded frames, across the 900 frames of the movie. For comparison, shuffle data were generated by randomly shuffling the cell-cell pairing between the look-up matrix and the "observed" response. To enable a fair comparison of decoding accuracy across brain regions, the tuned cells from each brain region were subsampled, and a random selection of 150 cells was used. A similar procedure was used for the 20 trials of the scrambled sequence, and the corresponding middle 20 trials of the continuous movie were used for comparison.
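A leave-one-trial-out sketch of this decoder in MATLAB is given below; the three-dimensional input array and the omission of the cell-identity shuffle and the 150-cell subsampling are simplifying assumptions.

% rates: nTrials x 900 x nCells array of single-trial, frame-wise responses of
% tuned cells (assumed input). Uses corr (Statistics Toolbox).
[nTrials, nFrames, nCells] = size(rates);
decodingError = zeros(nTrials, 1);
for t = 1:nTrials
    lookUp = squeeze(mean(rates(setdiff(1:nTrials, t), :, :), 1));        % nFrames x nCells look-up matrix
    lookUp = (lookUp - min(lookUp)) ./ (max(lookUp) - min(lookUp) + eps); % normalize each cell to 0-1
    test = squeeze(rates(t, :, :));                                       % nFrames x nCells observed trial
    err = zeros(nFrames, 1);
    for f = 1:nFrames
        r = corr(lookUp', test(f, :)');        % correlation of the observed PV with each frame's look-up PV
        [~, decodedFrame] = max(r);
        err(f) = abs(decodedFrame - f);        % decoding error in frames
    end
    decodingError(t) = mean(err);
end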
Rearranged scrambled movie analysis
To differentiate the effects of visual content from those of visual continuity between consecutive frames, we compared the responses of the same neuron to the continuous movie and the scrambled sequence. In the scrambled movie, the same visual frames as in the continuous movie were used, but they were shuffled in a pseudo-random fashion. The same scrambled sequence was repeated for 20 trials. The neural response was first computed at each frame of the scrambled sequence, keeping the frames in the chronological order of presentation. Then the scrambled sequence of frames was rearranged to recreate the continuous movie, and the corresponding neural responses were computed. To address the latency between movie frame presentation and its evoked neural response, which can differ across brain regions and neurons, this calculation was repeated for rearranged scrambled sequences with variable delays of τ = -500 to +500 ms (i.e., -150 to +150 bins at 3.33ms resolution, in steps of 5 bins or 16.6ms). The correlation coefficient was computed between the continuous movie response and this variably delayed response at each delay, as r_measured(τ) = corrcoef(R_continuous, R_scramble-rearranged(τ)), where R_continuous is the continuous movie response, obtained at 3.33ms resolution, and R_scramble-rearranged(τ) is the scrambled response after rearrangement, at latency τ. The latency τ yielding the largest correlation between the continuous and rearranged scrambled movie was designated as the putative response latency for that neuron. This was used in Figure 4-figure supplement 4. The value of r_measured(τ_max) was bootstrapped using 100 randomly generated frame reassignments, and this was used to z-score r_measured(τ_max), with z-score > 2 as the criterion for significance. The resulting z-score is reported in Figure 4-figure supplement 4.
The latency τ was rounded off for use with the 33.3ms bins and used to rearrange the actual as well as the shuffled data, to compute the strength of tuning for the scrambled presentation. Z-scored sparsity was computed as described above. This was compared with the z-scored sparsity of the continuous movie, as well as of the scrambled movie data without rearrangement, and is shown in Figure 4-figure supplement 5.
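The latency sweep can be sketched in MATLAB as below. The inputs (fine-resolution responses and the scrambled-to-continuous frame mapping) and the sign convention of the shift are assumptions for illustration.

% Rcont:  1 x 9000 continuous-movie response at 3.33 ms resolution (assumed input).
% Rscr:   1 x 9000 scrambled-sequence response, in chronological order (assumed input).
% order:  1 x 900 vector; order(k) is the continuous-movie frame shown at
%         scrambled position k (assumed input).
lags = -150:5:150;                              % -500 to +500 ms, in steps of 16.6 ms
rLag = zeros(size(lags));
for i = 1:numel(lags)
    shifted = circshift(Rscr, -lags(i));        % shift the response by the candidate latency
    rearranged = zeros(1, 9000);
    for k = 1:900                               % put each frame's response back in movie order
        src = (k - 1) * 10 + (1:10);            % 10 fine bins per 33.3 ms frame
        dst = (order(k) - 1) * 10 + (1:10);
        rearranged(dst) = shifted(src);
    end
    c = corrcoef(Rcont, rearranged);
    rLag(i) = c(1, 2);
end
[~, best] = max(rLag);
latencyMs = lags(best) * 3.33;                  % putative response latency of this neuron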
Acknowledgements
We thank the Allen Brain Institute for provision of the dataset, Dr. Josh Siegle for help with the dataset, Dr. Krishna Choudhary for proof-reading of the text and Dr. Massimo Scanziani for input and feedback. This work was supported by grants to M.R.M. by the National Institutes of Health NIH 1U01MH115746.
Competing interests
Authors declare that they have no competing interests.
Data availability
All data is publicly available at the Allen Brain Observatory – Neuropixels Visual Coding dataset (© 2019 Allen Institute, https://portal.brain-map.org/explore/circuits/visual-coding-neuropixels).
Code availability
All analyses were performed using custom-written code in MATLAB version R2020a. Codes written for analysis and visualization are available on GitHub, at https://github.com/cspurandare/ELife_MovieTuning107.
Video Legends
Figure 1-video 1 | Sequential movie. The 30-second movie clip that was shown, with the frame number indicated in the top right corner (updated every second, i.e., every 30 frames). The same movie clip was shown in two blocks of 30 repeats each.
Figure 4-video 1 | Scrambled movie. Frames from the sequential video clip (Figure 1-video 1) were presented in a scrambled sequence, with the same sequence repeated 20 times (2 blocks of 10 trials each). Frame numbers in the scrambled sequence are indicated in the top right corner.
Supplementary Figures
References
- 1.LOSS OF RECENT MEMORY AFTER BILATERAL HIPPOCAMPAL LESIONSJ. Neurol. Neurosurg. Psychiat https://doi.org/10.1136/jnnp.20.1.11
- 2.Hippocampal “Time Cells” Bridge the Gap in Memory for Discontiguous EventsNeuron 71:737–749
- 3.Differential effects of early hippocampal pathology on episodic and semantic memoryScience (1979) 277:376–380
- 4.The hippocampus as a cognitive mapClarendon Press
- 5.How vision and movement combine in the hippocampal place codeProceedings of the National Academy of Sciences 110:378–383
- 6.Spatial Selectivity of Rat Hippocampal Neurons: Dependence on Preparedness for MovementScience (1979) 244:1580–1582
- 7.Mapping of a non-spatial dimension by the hippocampal– entorhinal circuitNature 543:719–722
- 8.Receptive fields of single neurones in the cat’s striate cortexJ Physiol 148:574–591
- 9.The orientation and direction selectivity of cells in macaque visual cortexVision Res 22:531–544
- 10.Activity recall in a visual cortical ensembleNat Neurosci 15:449–455
- 11.A large-scale standardized physiological survey reveals functional organization of the mouse visual cortexNat Neurosci 23:138–151
- 12.Heterogeneity in the Responses of Adjacent Neurons to Natural Stimuli in Cat Striate CortexJ Neurophysiol 97:1326–1341
- 13.Natural movies evoke spike trains with low spike time variability in cat primary visual cortexJournal of Neuroscience 31:15844–15860
- 14.Sparse Coding and Decorrelation in Primary Visual Cortex During Natural VisionScience (1979) 287:1273–1276
- 15.Population code in mouse V1 facilitates readout of natural scenes through increased sparsenessNature Neuroscience 2014 17:851–857
- 16.Dynamics and sources of response variability and its coordination in visual cortexVis Neurosci https://doi.org/10.1017/S0952523819000117
- 17.Natural movies evoke spike trains with low spike time variability in cat primary visual cortexJournal of Neuroscience 31:107–121
- 18.Representation of visual scenes by local neuronal populations in layer 2/3 of mouse visual cortexFront Neural Circuits 5
- 19.Experience-Dependent Asymmetric Shape of Hippocampal Receptive FieldsNeuron 25:707–715
- 20.From Hippocampus to V1: Effect of LTP on Spatio-Temporal Dynamics of Receptive FieldsNeurocomputing 32:905–911
- 21.From synaptic plasticity to spatial maps and sequence learningHippocampus 25:756–762
- 22.Experience-dependent, asymmetric expansion of hippocampal place fieldsProc Natl Acad Sci U S A 94
- 23.Memory, navigation and theta rhythm in the hippocampal-entorhinal systemhttps://doi.org/10.1038/nn.3304
- 24.The Same Hippocampal CA1 Population Simultaneously Codes Temporal Information over Multiple TimescalesCurrent Biology 28:1499–1508
- 25.During Running in Place, Grid Cells Integrate Elapsed Time and Distance RunNeuron 88:578–589
- 26.Hippocampal ‘time cells’: time versus path integrationNeuron 78:1090–1101
- 27.Internally generated cell assembly sequences in the rat hippocampusScience (1979) 321:1322–1327
- 28.Linking hippocampal multiplexed tuning, Hebbian plasticity and navigationNature 2021 599 7885:442–448
- 29.Space and Time: The Hippocampus as a Sequence GeneratorTrends in Cognitive Sciences 22:853–869
- 30.Deciphering the hippocampal polyglot: the hippocampus as a path integration systemJ Exp Biol 199:173–85
- 31.The hippocampus as a spatial map. Preliminary evidence from unit activity in the freely-moving ratBrain Res 34:171–5
- 32.A selective mnemonic role for the hippocampus in monkeys: Memory for the location of objectsJournal of Neuroscience 8:4159–4167
- 33.Invariant visual representation by single neurons in the human brain:1102–1107
- 34. Neurons detect cognitive boundaries to structure episodic memories in humans. Nature Neuroscience 25:358–368
- 35. The hippocampus constructs narrative memories across distant events. Current Biology 31:4935–4945
- 36. Survey of spiking in the mouse visual system reveals functional hierarchy. Nature 592:86–92
- 37. Causal Influence of Visual Cues on Hippocampal Directional Selectivity. Cell 164:197–207
- 38. Impaired spatial selectivity and intact phase precession in two-dimensional virtual reality. Nat Neurosci 18:121–128
- 39. Theta phase precession in hippocampal neuronal populations and the compression of temporal sequences. Hippocampus 6:149–172
- 40. Moving bar of light evokes vectorial spatial selectivity in the immobile rat hippocampus. Nature https://doi.org/10.1038/s41586-022-04404-x
- 41. Natural Stimulus Statistics Alter the Receptive Field Structure of V1 Neurons. Journal of Neuroscience 24:6991–7006
- 42. Spatial Selectivity of Unit Activity in the Hippocampal Granular Layer. Hippocampus 3
- 43. A Quarter of a Century of Place Cells. Neuron 17:813–822
- 44. Spatial correlates of firing patterns of single cells in the subiculum of the freely moving rat. Journal of Neuroscience 14:2339–2356
- 45. Modulation of Visual Responses by Behavioral State in Mouse Visual Cortex. Neuron 65:472–479
- 46. Effects of Locomotion Extend throughout the Mouse Early Visual System. Current Biology 24:2899–2907
- 47. Reduced neural activity but improved coding in rodent higher-order visual cortex during locomotion. Nature Communications 13:1–8
- 48. Identification of a brainstem circuit regulating visual cortical state in parallel with locomotion. Neuron 83:455–466
- 49. Characterizing Speed Cells in the Rat Hippocampus. Cell Rep 25:1872–1884
- 50. Spatial and Behavioral Correlates of Hippocampal Neuronal Activity
- 51. Spatial tuning and brain state account for dorsal hippocampal CA1 activity in a non-spatial learning task. eLife 5
- 52. Hippocampal theta sequences. Hippocampus 17:1093–1099
- 53. Control of timing, rate and bursts of hippocampal place cells by dendritic and somatic inhibition. Nature Neuroscience 15:769–775
- 54. Theta phase-specific codes for two-dimensional position, trajectory and heading in the hippocampus. Nature Neuroscience 11
- 55. Arousal and Locomotion Make Distinct Contributions to Cortical Activity Patterns and Visual Encoding. Neuron 86:740–754
- 56. Arousal Modulates Retinal Output. Neuron 107
- 57. Arousal increases the representational capacity of cortical tissue. https://doi.org/10.1007/s10827-009-0138-6
- 58. O’Keefe, J. & Burgess, N. Geometric determinants of the place fields of hippocampal neurons. Nature 381:425–428
- 59. Multiscale representation of very large environments in the hippocampus of flying bats. Science 372
- 60. Finite Scale of Spatial Representation in the Hippocampus. Science 321:140–143
- 61. Dorsal CA1 hippocampal place cells form a multi-scale representation of megaspace. Current Biology 31:2178–2190
- 62. Large environments reveal the statistical structure governing hippocampal representations. Science 345:814–817
- 63. Unmasking the CA1 Ensemble Place Code by Exposures to Small and Large Environments: More Place Cells and Multiple, Irregularly Arranged, and Expanded Place Fields in the Larger Space. https://doi.org/10.1523/JNEUROSCI.2862-08.2008
- 64. Ensemble Place Codes in Hippocampus: CA1, CA3, and Dentate Gyrus Place Cells Have Multiple Place Fields in Large Environments. PLoS One 6
- 65. Functional Differences in the Backward Shifts of CA1 and CA3 Place Fields in Novel and Familiar Environments. PLoS One 7
- 66. Stable representation of a naturalistic movie emerges from episodic activity with gain variability. Nature Communications 12:1–15
- 67. Synfire Chains and Cortical Songs: Temporal Modules of Cortical Activity. Science 304:559–564
- 68. The Effects of GluA1 Deletion on the Hippocampal Population Code for Position. J Neurosci 32:8952–8968
- 69. Flow stimuli reveal ecologically appropriate responses in mouse visual cortex. Proc Natl Acad Sci U S A 115:11304–11309
- 70. Representational drift in the mouse visual cortex. Current Biology 31:4327–4339
- 71. Contribution of behavioural variability to representational drift. eLife 11
- 72. Distributed hierarchical processing in the primate cerebral cortex. Cereb Cortex 1:1–47
- 73. Dynamics of the hippocampal ensemble code for space. Science 261:1055–1058
- 74. Disrupted Place Cell Remapping and Impaired Grid Cells in a Knockin Model of Alzheimer’s Disease. Neuron 107:1095–1112
- 75. Grid cells without theta oscillations in the entorhinal cortex of bats. Nature https://doi.org/10.1038/nature10583
- 76. View-responsive neurons in the primate hippocampal complex. Hippocampus 5:409–424
- 77. Hippocampal spatial view cells for memory and navigation, and their underlying connectivity in humans. Hippocampus https://doi.org/10.1002/HIPO.23467
- 78. Spatial modulation of hippocampal activity in freely moving macaques. Neuron
- 79. Spatial representations of self and other in the hippocampus. Science 359:213–218
- 80. Largely intact memory for spatial locations during navigation in an individual with dense amnesia. Neuropsychologia 170
- 81. Internally generated reactivation of single neurons in human hippocampus during free recall. Science 322:96–101
- 82. The hippocampus constructs narrative memories across distant events. Current Biology 31:4935–4945
- 83. Flexible reuse of cortico-hippocampal representations during encoding and recall of naturalistic events. Nature Communications 14:1–15
- 84. A sense of direction in human entorhinal cortex. Proc Natl Acad Sci U S A 107:6487–6492
- 85. Cellular networks underlying human spatial navigation. Nature 425
- 86. Multisensory control of hippocampal spatiotemporal selectivity. Science 340:1342–1346
- 87. A Role for the Longitudinal Axis of the Hippocampus in Multiscale Representations of Large and Complex Spatial Environments and Mnemonic Hierarchies. The Hippocampus - Plasticity and Functions https://doi.org/10.5772/INTECHOPEN.71165
- 88. Information flow across the cortical timescale hierarchy during narrative construction. Proc Natl Acad Sci U S A 119
- 89. Robust effects of corticothalamic feedback and behavioral state on movie responses in mouse dLGN. eLife 11
- 90. Multiscale temporal integration organizes hierarchical computation in human auditory cortex. Nat Hum Behav 6
- 91. Temporal tuning properties along the human ventral visual stream. Journal of Neuroscience 32
- 92. A Hierarchy of Temporal Receptive Windows in Human Cortex. Journal of Neuroscience 28:2539–2550
- 93. Neural organization for the long-term memory of paired associates. Nature 354:152–155
- 94. Neuronal correlate of visual associative long-term memory in the primate temporal cortex. Nature 335
- 95. Activities of visual cortical and hippocampal neurons co-fluctuate in freely moving rats during spatial behavior. eLife 4
- 96. Coherent encoding of subjective spatial position in visual cortex and hippocampus. Nature 562:124–127
- 97. Expansion and Shift of Hippocampal Place Fields: Evidence for Synaptic Potentiation during Behavior. Computational Neuroscience 741–745 https://doi.org/10.1007/978-1-4757-9800-5_115
- 98. Sequence learning and the role of the hippocampus in rodent navigation. 294–300 https://doi.org/10.1016/j.conb.2011.12.005
- 99. Single-trial learning of novel stimuli by individual neurons of the human hippocampus-amygdala complex. Neuron 49:805–813
- 100. Representation of contralateral visual space in the human hippocampus. https://doi.org/10.1101/2020.07.30.228361
- 101. The Human Brain Encodes a Chronicle of Visual Events at Each Instant of Time Through the Multiplexing of Traveling Waves. Journal of Neuroscience 41:7224–7233
- 102. Reconstructing visual experiences from brain activity evoked by natural movies. Current Biology 21:1641–1646
- 103. The hippocampus: part of an interactive posterior representational system spanning perceptual and memorial systems. J Exp Psychol Gen 142:1242–1254
- 104. Update on Memory Systems and Processes. Neuropsychopharmacology 36:251–273
- 105. Spontaneous behaviors drive multidimensional, brainwide activity. https://doi.org/10.1126/science.aav7893
- 106. The Allen Mouse Brain Common Coordinate Framework: A 3D Reference Atlas. Cell 181:936–953
- 107. Code and datasets generated and needed to reproduce results in upcoming eLife paper
- 108. Quality Metrics to Accompany Spike Sorting of Extracellular Signals. Journal of Neuroscience 31:8699–8705
- 109. Quantitative measures of cluster quality for use in extracellular recordings. Neuroscience 131:1–11
Copyright
© 2023, Chinmay S. Purandare & Mayank R. Mehta
This article is distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use and redistribution provided that the original author and source are credited.