Abstract
Humans conceptualize time in terms of space, allowing flexible time construals from various perspectives. We can travel internally through a timeline to remember the past and imagine the future (i.e., mental time travel) or watch from an external standpoint to have a panoramic view of history (i.e., mental time watching). However, the neural mechanisms that support these flexible temporal construals remain unclear. To investigate this, we asked participants to learn a fictional religious ritual of 15 events. During fMRI scanning, they were guided to consider the event series from either an internal or an external perspective in different tasks. Behavioral results confirmed the success of our manipulation, showing the expected symbolic distance effect in the internal-perspective task and the reverse effect in the external-perspective task. We found that the activation level in the posterior parietal cortex correlated positively with sequential distance in the external-perspective task but negatively in the internal-perspective task. In contrast, the activation level in the anterior hippocampus positively correlated with sequential distance regardless of the observer’s perspective. These results suggest that the hippocampus stores the memory of event sequences allocentrically, in a perspective-agnostic manner. Conversely, the posterior parietal cortex retrieves event sequences egocentrically, from the perspective optimal for the current task context. Such complementary allocentric and egocentric representations support both the stability of memory storage and the flexibility of time construals.
Introduction
How the brain represents time remains mysterious. In prospective timing, a starting point is defined beforehand, and neural activity tracks the elapsed time like a stopwatch. This neural stopwatch is manifested in various forms, including ramping activity, sequential activity, and neural trajectories (e.g., see reviews by Buonomano & Laje, 2010; Wittmann, 2013; Eichenbaum, 2014; Tsao et al., 2022). However, a stopwatch can only track duration in real time. How can we escape the present to remember the past or imagine the future?
One solution, which might be unique to humans, is to conceptualize time in terms of space (i.e., the spatial construals of time hypothesis or the conceptual metaphor theory, e.g., Traugott, 1978; Lakoff & Johnson, 1980; see recent reviews by Núñez & Cooperrider, 2013; Sinha & Gärdenfors, 2014). This is achieved by segmenting time into events—the basic temporal entities the observer conceives to have a beginning and an end (Zacks & Tversky, 2001)—and ordering these temporal entities in space so that events occurring at different times can be easily maintained in working memory (Abrahamse et al., 2014; Figure 1A). This reconstructive process involves two core time concepts: duration and sequence.

Spatial construal of time and experimental design.
(A) The schematic diagram of spatial construal of time. It illustrates two core time concepts (sequence and duration) and two major perspectives on event series (mental time travel and mental time watching). (B) Stimulus: a fictional religious ritual of 15 events following a specific sequence, enduring particular durations, and happening on predetermined parts of the day. To minimize potential confounds between the semantic content of the event phrases and the temporal structure of the events, we randomly assigned the phrases to the events, creating two versions for participants with even and odd ID numbers. Only one version is illustrated here. Both versions can be seen in Figure 1—figure supplement 1 and Figure 1—source data 1. (C) Task paradigm. In the external-perspective task, participants judged whether the target event happened in the same part of the day as the reference event. In the internal-perspective task, participants imagined themselves performing the reference event and judged whether the target event happened in the past or will happen in the future.

Reaction time analysis.
For diagnostic purposes, we plotted the partial residuals of each predictor that significantly influenced the reaction time (RT) of the correct trials. The partial residual includes the effect of each variable, its interaction with Task Type, and the residuals from the full linear regression model. (A) RT increased with Syllable Length, showing a similar trend across both tasks. (B) Sequential Distance affected RT in opposite directions depending on the perspective. (C) Event Duration influenced RT only in the external-perspective task, with no effect in the internal-perspective task.
Unlike prospective timing, which tracks the continuous passage of time, durations in time construals are event-based (Tsao et al., 2022): the intervals’ boundaries are constituted by events (Sinha & Gärdenfors, 2014), and the duration of events reflects their span (Núñez & Cooperrider, 2013; Figure 1A). Neurocognitive evidence suggests that the neural representation of duration engages distinct brain systems. The motor system—particularly the supplementary motor area—has been associated with prospective timing (e.g., Protopapa et al., 2019; Nani et al., 2019; De Kock et al., 2021; Robbe, 2023), whereas the hippocampus is considered to support the representation of duration embedded within an event sequence (e.g., Barnett et al., 2014; Thavabalasingam et al., 2018; see also the comprehensive review by Lee et al., 2020).
As for event sequence, over a century ago, the philosopher John McTaggart proposed the distinction between two core time construals: the A-series and the B-series (McTaggart, 1908). The A-series assumes a deictic center of time—the observer’s subjective now—as the reference, and orders events as being in the past or the future according to this deictic center. The B-series concerns the order of events in a sequence regardless of the observer’s subjective now. The distinction between the A- and B-series has been widely echoed, under different names, in various accounts of temporal frames of reference in cognitive linguistics (see the review by Bender & Beller, 2014).
In general terms, these two core time construals can be understood through two complementary perspectives: an internal and an external one (Núñez & Cooperrider, 2013; Tversky & Jamalian, 2021; Figure 1A). The internal perspective on time series is akin to the “route” perspective in the spatial domain (Siegel & White, 1975). It aligns with the cognitive process called “mental time travel” (Tulving, 1984, 2002; Suddendorf et al., 2009). The traveler can project themself into any event in the time series and redefine past and future according to their self-location—their subjective now. In this sense, the brain is no longer a stopwatch but a time machine, taking the traveler back and forth in time (Buonomano, 2017). The A-series is typically constructed from this internal viewpoint. By contrast, the external perspective on time series is akin to the “survey” perspective in the spatial domain (Siegel & White, 1975). It relates to the cognitive process called “mental time watching” (Stocker, 2012, 2014). The watcher, who is outside of the time series, can have a panoramic view of multiple events at different times and localize them relative to one another or to external temporal landmarks (e.g., sunrise and sunset in a day or historical events in the long term). In this sense, the brain is more like a dimensional ascension device, taking the watcher out of the one-dimensional timeline to an external viewpoint in higher-dimensional space. The B-series is generally constructed from this external viewpoint.
Recent studies have already begun to investigate the neural representation of the memorized event sequence (e.g., Deuker et al., 2016; Thavabalasingam et al., 2018; Bellmund et al., 2019, 2022; see reviews by Cohn-Sheehy & Ranganath, 2017; Bellmund et al., 2020). Yet, the neural mechanisms that enable the brain to construct distinct construals of an event sequence remain largely unknown. Valuable insights may be drawn from research in the spatial domain, which differentiates the neural representation in allocentric and egocentric reference frames. According to an influential neurocomputational model (Byrne et al., 2007; Bicanski & Burgess, 2018; Bicanski & Burgess, 2020), allocentric and egocentric spatial representations are dissociable in the brain—they are respectively implemented in the medial temporal lobe (MTL)—including the hippocampus—and the parietal cortex. Various egocentric representations in the parietal cortex derived from different viewpoints can be transformed and integrated into a unified allocentric representation and stored in the MTL (i.e., bottom-up process). Conversely, the allocentric representation in the MTL can serve as a template for reconstructing diverse egocentric representations across different viewpoints in the parietal cortex (i.e., top-down process).
In line with the spatial construals of time hypothesis, several authors have recently suggested that such mutually engaged egocentric and allocentric reference frames (in the parietal cortex and the medial temporal lobe, respectively) proposed in the spatial domain might also apply to the temporal one (e.g., Gauthier & van Wassenhove, 2016ab; Gauthier et al., 2019, 2020; Bottini & Doeller, 2020). If this hypothesis holds, it could explain how the brain flexibly generates diverse construals of the same event sequence. Specifically, the hippocampus may encode a consistent representation of an event sequence that is independent of whether an individual adopts an internal or external perspective, reflecting an allocentric representation of time. In contrast, parietal cortical representations are expected to vary flexibly with the adopted perspective that is shaped by task demands, reflecting an egocentric representation of time.
This functional magnetic resonance imaging (fMRI) study aimed to directly test this hypothesis by systematically investigating the neural mechanisms underlying the time construals of event sequence and duration. The event series was a fictional religious ritual of 15 events that participants learned the day before scanning (Figure 1B). This ritual contained all the core temporal elements: the constituent events followed a specific sequence, endured particular durations, and happened on predetermined parts of the day (i.e., the external temporal landmarks). Participants learned these core temporal elements by reading the description and imagining going through the events one after another. Post-learning tests confirmed that all participants had learned the temporal structure of the ritual before scanning (see Methods for details of the learning and testing procedure).
During fMRI scanning, participants performed two tasks that guided them to consider the event series from internal and external perspectives (i.e., mental time travel vs. mental time watching; Figure 1C). In each trial, participants saw two sequential event phrases. The first was the reference event, and the second was the target event. In the external-perspective task, participants localized events relative to external temporal boundaries, judging whether the target event happened in the same or a different part of the day as the reference event. In the internal-perspective task, participants were instructed to project themselves into the reference event and localize the target event relative to this temporal self-location, judging whether the target event happened in the past or the future of the reference event (see Methods for details of the scanning procedure).
Results
Time was processed differently from internal and external perspectives
Participants had significantly greater accuracy in the external-perspective task than in the internal-perspective task (external-perspective task: M = 93.5%, SD = 4.7%; internal-perspective task: M = 89.5%, SD = 8.1%; paired t(31) = 3.33, p = 0.002). The reaction time (RT) of the correct trials in the external-perspective task was also significantly shorter than in the internal-perspective task (external-perspective task: M = 1475 ms, SD = 529 ms; internal-perspective task: M = 1578 ms, SD = 587 ms; fixed effect of Task Type in a random-intercept-and-slope linear mixed model with Participant as the random-effects grouping factor: F(1, 31) = 27.44, p < 0.001).
To further explore the factors affecting the RT of the correct trials, we built a random-intercept linear mixed model with Participant as the random-effects grouping factor. Fixed-effect variables included Sequential Distance (i.e., the number of events between the reference and the target events), Duration (i.e., the duration of the target events), Syllable Length (i.e., the number of syllables in the phrase of the target events), Task Type (i.e., external- vs. internal-perspective tasks), and the interactions between Task Type and all the other variables (Table 1). As a sanity check, we found a significant main effect of Syllable Length (F(1, 6918) = 35.59, p < 0.001), since participants were expected to take longer to read longer phrases. Intriguingly, we also found significant interaction effects between Task Type and Sequential Distance (F(1, 6918) = 28.22, p < 0.001) and between Task Type and Duration (F(1, 6918) = 12.81, p < 0.001).

The reaction time of the correct trials indicates that time is processed differently under internal and external perspectives1.
1Linear mixed model formula: RT ∼ 1 + Task Type * (Sequential Distance + Duration + Syllable Length) + (1 | Participant)
2Significant effects are highlighted. Significant main effects are not highlighted when the corresponding interaction effect was also significant.
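For concreteness, the model in footnote 1 can be fit in R with the lme4 package (as used in the Methods), together with lmerTest for F-tests. This is a minimal sketch in which the data frame and column names (trials, rt, task, seq_dist, duration, syll_len, participant) are illustrative placeholders.

```r
library(lme4)
library(lmerTest)  # provides F-tests with Satterthwaite degrees of freedom via anova()

# 'trials' is assumed to contain one row per correct trial with columns:
# rt (ms), task (factor: external/internal), seq_dist, duration, syll_len, participant
m <- lmer(rt ~ 1 + task * (seq_dist + duration + syll_len) + (1 | participant),
          data = trials)
anova(m)    # main effects and Task Type interactions (cf. Table 1)
summary(m)  # fixed-effect estimates and random-intercept variance
```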
Sequential Distance was correlated positively with RT in the external-perspective task (z = 3.80, p < 0.001) but negatively in the internal-perspective task (z = -3.71, p < 0.001). The negative correlation between RT and egocentric distance has been consistently observed in previous studies in which participants engaged in temporal self-projection: participants make more effort to differentiate past and future for the events close to their own temporal position (e.g., Arzy et al., 2008; Arzy et al., 2009ab; Gauthier & van Wassenhove, 2016ab; Gauthier et al., 2019). This pattern can broadly be attributed to the symbolic distance (SD) effect (Moyer & Landauer, 1967; Moyer & Bayer, 1976; Shepard & Judd, 1979), which refers to the fact that “time needed to compare two symbols varies inversely with the distance between their referents on the judged dimension” (Moyer & Bayer, 1976, p. 229). The positive correlation between RT and sequential distance from an external perspective was predicted by Gauthier & van Wassenhove (2016a). This prediction was inspired by classic studies on mental scanning using visual imagery (e.g., Shepard & Metzler, 1971; Kosslyn et al., 1978): from an external perspective, people might compare two referents by mentally traversing the intermediate states between them, resulting in longer times to scan longer sequential distances.
As for Duration, it showed a significantly negative relation with RT in the external-perspective task (z = -4.44, p < 0.001) but not in the internal-perspective task (z = 0.66, p = 0.51). A possible explanation is that events with longer durations were more salient and thus easier to compare with the external landmarks.
To ensure that the behavioral responses in the internal- and external-perspective tasks did not confound the effects above, we built another linear mixed model incorporating four additional terms: Same/Different parts of the day, Future/Past, and their respective interactions with Task Type (Supplementary File 1). The interaction effects between Task Type and both Sequential Distance and Duration remained significant.
Furthermore, we found an additional interaction effect between Task Type and Same/Different parts of the day (F(1, 6914) = 70.49, p < 0.001). This effect paralleled that of Sequential Distance, as events occurring within the same or different parts of the day corresponded to shorter and longer sequential distances, respectively. This pattern can be interpreted in terms of a categorical effect: sequential distances within the same day part were perceived as shorter (i.e., a chunking effect), whereas distances spanning different day parts were perceived as longer (i.e., a boundary effect).
The behavioral results suggest that our attempt to induce different perspectives on the event series was successful. The two tasks yielded RTs that were distinct functions of sequential distance, in line with the predicted SD effect for the internal perspective and the reverse-SD effect for the external perspective (Gauthier & van Wassenhove, 2016a).
Internal- compared to external-perspective task activated different brain networks
We first directly contrasted the activity level between external- and internal-perspective tasks in the time window of the target events (Figure 3A; Table 2; see Figure 3—figure supplement 1A for a surface view; voxel-level p < 0.001, cluster-level FWE corrected p < 0.05). Compared with the external-perspective task, the internal-perspective task specifically activated regions of the default network (DN) in the right hemisphere: the precuneus (PreC), the retrosplenial cortex (RSC), the superior frontal gyrus (SFG), and the angular gyrus (AG). This finding aligns with evidence indicating that the DN plays a crucial role in self-projection (see the review by Buckner & Carroll, 2007). In Figure 3—figure supplement 1, we also compared the significant clusters in the internal-perspective task with the two subnetworks of the DN: DN-A and DN-B (e.g., Braga & Buckner, 2017; Reznik et al., 2013). The internal-perspective task mostly engaged DN-A rather than DN-B. This observation is consistent with existing evidence suggesting that DN-A is more closely associated with episodic memory, whereas DN-B is primarily involved in social processing (e.g., Lin et al., 2018; DiNicola et al., 2020).

Neural correlates of specific perspectives and syllable length.
(A) Univariate contrast between external-perspective and internal-perspective tasks (voxel-level p < 0.001, cluster-level FWE corrected p < 0.05). All the significant areas were in the right hemisphere. PreC: precuneus; RSC: retrosplenial cortex; SFG: superior frontal gyrus; AG: angular gyrus; SMA: supplementary motor area; SMG: supramarginal gyrus. (B) Parametric modulation of syllable length as a sanity check (voxel-level p < 0.001, cluster-level FWE corrected p < 0.05). The activation level in the anterior part of the left superior temporal gyrus and in the visual cortex positively correlated with syllable length. R = Right Hemisphere; L = Left Hemisphere.

Univariate contrast between internal- and external-perspective tasks (voxel-level p < 0.001, cluster-level FWE corrected p < 0.05 across the whole cortex).
1The average Montreal Neurological Institute coordinates of all the significant voxels of each cluster. The precuneus and the retrosplenial cortex were connected as one cluster under the threshold p < 0.001 (z > 3.09). In this case, we increased the threshold to the point at which the precuneus and the retrosplenial cortex were separate (z > 3.3) and calculated the average coordinates of each cluster.
2The voxel size is 3 × 3 × 3 mm³.
Compared with the internal-perspective task, the external-perspective task specifically activated the supplementary motor area (SMA) and the supramarginal gyrus (SMG) in the right hemisphere (Figure 3A; Table 2; voxel-level p < 0.001, cluster-level FWE corrected p < 0.05). This finding is open to interpretation. The area found in the right SMG was centered at the junction between the posterior part of the SMG and the posterior part of the superior temporal gyrus. Previous evidence has shown that this temporoparietal junction area relates to out-of-body experiences (e.g., see reviews by Blanke et al., 2004; Blanke & Arzy, 2005) or plays a role in overcoming egocentric emotion bias towards others (e.g., Silani et al., 2013). Thus, the right SMG region found here may be important for the mental construction of an external perspective.
The right posterior parietal cortex implemented opposite representations of sequential distance across external and internal perspectives
Next, we used the parametric modulation method to detect neural correlates of the temporal variables (i.e., Sequential Distance and Duration) across external- and internal-perspective tasks. We built a single general linear model (GLM) in which the target events were simultaneously modulated by their duration and by their sequential distance to the reference events. The target events in the external- and internal-perspective tasks were treated separately. To detect whether the temporal information was represented differently from different perspectives, we first examined the interaction effect between each temporal variable and Task Type. If the interaction effect was not significant, we also examined the main effect combining both tasks.
As a sanity check, we investigated whether the employed parametric modulation method could successfully reveal the effect of phrase length in the visual cortex, given that our stimuli were visually presented. To do so, we used Syllable Length as the parameter modulating the condition of both the reference and the target events in the above GLM. We found that the activity level of not only the visual cortex but also the left superior temporal gyrus in the language network positively correlated with the number of syllables (Figure 3B; voxel-level p < 0.001, cluster-level FWE corrected p < 0.05). This result confirmed our prediction and validated our methodology.
To detect the regions in which the activation level was modulated by Sequential Distance (i.e., the number of events between the reference and the target events), we first searched for regions with a significant interaction effect between Task Type (i.e., external- vs. internal-perspective tasks) and Sequential Distance (Figure 4A-C). The only significant region we could find across the whole cortex was localized in the border area between the angular gyrus and the superior division of the lateral occipital cortex in the right hemisphere (i.e., the boundary area between Brodmann areas 7, 19, and 39; Figure 4A; voxel-level p < 0.001, cluster-level FWE corrected p < 0.05). More specifically, this region lies mostly in the lateral wall of the posterior intraparietal sulcus (hIP5: 56.2%, hIP6: 9.5%, hIP4: 5.9%; Richter et al., 2019) and the posterior part of the angular gyrus (PGp: 21.4%; Caspers et al., 2006, 2008) (assignment based on the maximum probability map; Eickhoff et al., 2005). The MNI coordinate of the center voxel was 38, -69, 35.

Neural correlates of event sequence.
A-C: Interaction effect between Task Type (i.e., external- vs. internal-perspective tasks) and Sequential Distance. (A) The only cortical region showing a significant interaction effect was localized in the right posterior parietal cortex (voxel-level p < 0.001, cluster-level FWE corrected p < 0.05). (B) Region-of-interest analysis shows that the activation level in the right posterior parietal cortex correlated with sequential distance positively in the external-perspective task and negatively in the internal-perspective task. (C) A further illustration of the relations between the activation level in the right posterior parietal cortex and sequential distance in the two tasks. The error bar indicates the standard error of the mean, and the shaded band around the linear regression line indicates the 95% confidence interval. D-F: Main effect of Sequential Distance. (D) The right hippocampal head shows a significant main effect of Sequential Distance within the mask of the bilateral hippocampus (voxel-level p < 0.001, cluster-level FWE corrected p < 0.05; voxel-level p < 0.05 shown for illustration purposes). (E) Region-of-interest analysis shows that the correlation between the activation level in the right hippocampal head and the sequential distance was independent of perspective. (F) A further illustration of the relations between the activation level in the right hippocampal head and sequential distance in the two tasks. The error bar indicates the standard error of the mean, and the shaded band around the linear regression line indicates the 95% confidence interval. R = Right Hemisphere. **: p < 0.01; ***: p < 0.001.
For convenience, we will refer to this region as the posterior parietal cortex (PPC) in the following text. Figure 4B further shows the beta estimates of Sequential Distance in the parametric modulation analysis in the right PPC. The activation level in this region correlated with sequential distance positively in the external-perspective task (t(31) = 2.97, p = 0.006) but negatively in the internal-perspective task (t(31) = -4.19, p < 0.001).
Here, the changes in activity level within the PPC paralleled the RT pattern. Whether to control for the influence of RT on fMRI activation is a well-known dilemma. On the one hand, RT reflects underlying cognitive processes and therefore should not be fully controlled for. On the other hand, RT can independently influence neural activity, as several brain networks vary with RT irrespective of the specific cognitive process involved—a domain-general effect. For instance, regions within the multiple-demand network are often positively correlated with RT and task difficulty across diverse cognitive domains (e.g., Fedorenko et al., 2013; Mumford et al., 2024). To evaluate the second possibility, we conducted an additional control analysis by including trial-by-trial RT as a parametric modulator in the first-level model (see Methods). Notably, the same PPC region remained the only area in the entire brain showing a significant interaction between Task Type and Sequential Distance (voxel-level p < 0.001, cluster-level FWE-corrected p < 0.05). This finding indicates that PPC activity cannot be fully attributed to RT. Furthermore, we do not interpret the effect as reflecting a domain-general RT influence, as regions within the multiple-demand system—typically sensitive to RT and task difficulty—did not exhibit significant activation in our data.
To provide a direct illustration of how the activation level in the right PPC varied with sequential distance in the two perspectives, we built another GLM with each sequential distance (i.e., 1 to 5) in each task as a separate condition. Figure 4C confirms that the activation level in the external-perspective task increased as the sequential distance increased, whereas the activation level in the internal-perspective task decreased as the target event moved farther away from the reference event into which participants had projected themselves.
These results suggest that the parietal cortex implements an egocentric representation of the event sequence, which varies with different perspectives.
The right hippocampal head implemented consistent representations of sequential distance across external and internal perspectives
We did not find any regions in the hippocampus showing a significant interaction effect between Task Type and Sequential Distance. Instead, we found a region in the right hippocampal head in which the activation level positively correlated with the sequential distance when we combined the external- and internal-perspective tasks (Figure 4D; voxel-level p < 0.001, cluster-level FWE corrected p < 0.05). The MNI coordinate of the center voxel was 25, -11, -15. Figure 4E further shows that this positive correlation with sequential distance did not significantly differ between the external- and internal-perspective tasks (paired t(31) = -0.906, p = 0.372).
We also illustrate the activation level in this hippocampal region for each sequential distance in the external- and internal-perspective tasks, respectively (Figure 4F): the activation level in the right hippocampus tended to increase as the sequential distance increased, regardless of the task.
These results suggest that the hippocampus implements an allocentric representation of the event sequence independent of the perspectives, contrary to the egocentric representation in the posterior parietal cortex.
The right hippocampal body implemented the representation of event duration in the internal-perspective task
In the cortex, we did not detect any regions where the activation level showed a significant interaction effect between Task Type and Duration or a positive main effect of Duration. In the hippocampus, we also found no regions where the activation level showed a significant interaction effect between Task Type and Duration. Instead, we found a region in the right hippocampal body where the activation level showed a significantly positive correlation with Duration when combining the two tasks (Figure 5A; voxel-level p < 0.001, cluster-level FWE corrected p < 0.05). The MNI coordinate of the center voxel was 39, -24, -12. However, Figure 5B shows that directly comparing the beta estimates in the two tasks reveals a significant difference (paired t(31) = 3.07, p = 0.004). The mean activation level in this area positively correlated with Duration only in the internal-perspective task (t(31) = 5.54, p < 0.001), not in the external-perspective task (t(31) = 0.68, p = 0.502). This latter analysis is circular, as the ROI was defined as the voxels showing a significantly positive correlation with Duration combining the two tasks in the first place. Nevertheless, it illustrates that the positive correlation in the voxel-level analysis was driven only by the internal-perspective task. Due to the stringent threshold of multiple comparisons across voxels, this interaction effect between Task Type and Duration was not detected in the voxel-level analysis.

Neural correlates of event duration.
(A) The right hippocampal body shows a significant main effect of Duration within the mask of the bilateral hippocampus (voxel-level p < 0.001, cluster-level FWE corrected p < 0.05; voxel-level p < 0.05 shown for illustration purposes). (B) However, region-of-interest analysis shows that the correlation between the activation level in the right hippocampal body and Duration significantly differs between the internal- and the external-perspective tasks. (C) A further illustration of the relations between the activation level in the right hippocampal body and duration in the two tasks. The error bar indicates the standard error of the mean, and the shaded band around the linear regression line indicates the 95% confidence interval. (D) Directly comparing the effects of Sequential Distance and Duration in the head and the body of the hippocampus shows a double dissociation pattern: the hippocampal head represented Sequential Distance but not Duration, while the hippocampal body represented Duration but not Sequential Distance. R = Right Hemisphere. ***: p < 0.001; n.s.: not significant.
Figure 5C directly illustrates how the activation level in this hippocampal region varied with duration: the activation level in the right hippocampal body increased with duration in the internal-perspective task but not in the external-perspective task.
The difference in duration representation between the two tasks remains open to interpretation. One possible explanation is that the hippocampus is preferentially involved in memory for durations embedded within event sequences (see review by Lee et al., 2020). In the internal-perspective task, participants indeed localized events within the event sequence itself. By contrast, the external-perspective task encouraged participants to compare the event sequence with external temporal landmarks, which may have attenuated the hippocampal representation of duration.
We also implemented a post hoc analysis to fully illustrate the Sequential Distance and Duration effects in the internal-perspective task within the regions of interest of the hippocampal head and body (Figure 5D). These two regions were defined as the regions where the activation level significantly and positively correlated with Sequential Distance and Duration, respectively. There was no significant evidence that the hippocampal head also represented Duration (t(31) = 0.506, p = 0.616), nor that the hippocampal body also represented Sequential Distance (t(31) = -0.616, p = 0.543). Further confirmatory analyses are needed to verify such a double dissociation pattern between the neural representation of sequence and duration in the head and the body of the hippocampus.
Discussion
This study investigated the neural correlates of sequence and duration in task contexts where participants took internal or external perspectives on an event series, thereby testing the hypothesis of complementary allocentric and egocentric reference frames in the temporal domain. We found that both the right PPC and the right hippocampal head represented the sequential distance between the reference and the target events. However, the representation in the right PPC varied with the perspective taken; its activity level correlated with sequential distance positively in the external-perspective task and negatively in the internal-perspective task. In contrast, the activation level in the right hippocampal head positively correlated with sequential distance regardless of the perspective taken. Moreover, we found that the activation level in the right hippocampal body positively correlated with event duration in the internal-perspective task.
The negative correlation between the activation level in the right PPC and sequential distance has already been observed in a previous fMRI study by Gauthier & van Wassenhove (2016b). In their study, the participants were instructed to mentally position themselves at a specific time point and judge whether a target event occurred before or after that time point. The authors identified a similar brain region (reported MNI coordinates of the peak voxel: 42, −70, 40), closely matching the activation observed in the present study (MNI coordinates of the peak voxel: 39, −70, 35). In both studies, activation in this region increased as the target event approached the self-positioned time point, which aligns with evidence suggesting that the posterior parietal cortex implements egocentric representations. For example, neuropsychological studies have demonstrated that patients with lesions in the bilateral or unilateral right PPC show “egocentric disorientation” (Aguirre & D’Esposito, 1999): they are unable to localize objects in relation to themselves (e.g., Case 2: Levine et al., 1985; Patient DW: Stark, 1996; Patient MU: Wilson et al., 1997, 2005). Consistently, we found in a recent fMRI experiment that the distributed activity pattern in the bilateral PPC could encode egocentric but not allocentric directions of target objects during memory retrieval (Dutriaux et al., 2024).
What is novel here is that the correlation between the activation level of the right PPC and sequential distance was reversed to a positive one when an external perspective was used. Since an external perspective is often associated with the allocentric reference frame (e.g., O’Keefe & Nadel, 1978; Arzy & Schacter, 2019), our results seem to challenge the view that the parietal cortex implements egocentric representation. However, the conflict is more apparent than real. Perspective-taking, whether internal or external, by definition, must be egocentric, as mental images must be constructed from a specific viewpoint (see similar arguments by Vogeley & Fink, 2003; Filimon, 2015). Therefore, a brain region that implements egocentric representation should vary its activity with the perspectives taken, as shown in the right PPC. This finding is crucial; it suggests that the distance coding in the right PPC is not a “perspective agnostic” magnitude effect, a possibility that previous fMRI studies could not definitively rule out.
Several previous studies have already probed the nature of the PPC’s function (e.g., see reviews by Andersen et al., 1997; Cohen & Andersen, 2002; Wagner et al., 2005; Abrahamse et al., 2014; Ciaramelli et al., 2008; Cabeza et al., 2008; Hutchinson et al., 2009; Whitlock, 2017; Sestieri et al., 2017; Summerfield et al., 2020; Bottini & Doeller, 2020). Here, we want to highlight three crucial findings. First, neuropsychological studies indicate that lesions in the bilateral PPC or lateral occipitoparietal cortex lead not only to egocentric disorientation but also to simultanagnosia, one of the key components of Bálint’s syndrome (Bálint, 1909; Rizzo & Vecera, 2001; Chechlacz & Humphreys, 2014; e.g., WF: Holmes and Horax, 1919; MVV: Kase et al., 1977; MU: Wilson et al., 1997, 2005). Patients with simultanagnosia cannot perceive multiple entities as a whole or comprehend the overall meaning of a scene. Second, electrophysiological recordings in the lateral intraparietal area of the macaque cortex have identified gain-field neurons, which can be used to transform reference frames anchored to different body parts (e.g., Andersen et al., 1985; Zipser & Andersen, 1988; Andersen et al., 1997; Cohen & Andersen, 2002). Third, fMRI studies in humans show that the particular PPC area found in our study (i.e., the hIP5 area in the lateral wall of the intraparietal sulcus and the dorsal PGp in the angular gyrus) is more engaged in episodic memory retrieval, in contrast to the medial wall of the intraparietal sulcus and the dorsal part of the PPC, which are more involved in perceptual attention (e.g., Hutchinson et al., 2009; Sestieri et al., 2010; Sestieri et al., 2017). Taking all these findings together, the PPC area identified in this study might contribute to memory retrieval in egocentric reference frames, maintaining relations among multiple memorized entities in working memory from a perspective optimal for the current task context (Abrahamse et al., 2014). Such a relational schema, constructed from stereotypical perspectives, is also represented in the PPC as a template to guide attention (Summerfield et al., 2020; Bottini & Doeller, 2020). It is thus not surprising that the two most common perspectives for time construals—mental time travel and mental time watching—involve the PPC.
In contrast to the PPC, we observed that the activation level in the head of the right hippocampus positively correlated with the sequential distance, regardless of the perspective. Previous studies have already shown that the hippocampal activation level correlates with distance (e.g., Morgan et al., 2011; Howard et al., 2014; Garvert et al., 2017; Theves et al., 2019; Viganò et al., 2023), and that the distributed activity in the hippocampus can encode distance (e.g., Deuker et al., 2016; Park et al., 2021). Most studies reported hippocampal effects either bilaterally or predominantly in the right hemisphere, whereas only one study identified the effect in the left hippocampus instead (i.e., Morgan et al., 2011). Our study is novel in showing that this distance coding in the hippocampus is independent of perspective, indicating an allocentric representation of the event series. This finding aligns with the hypothesis in the spatial domain that the hippocampus implements an allocentric cognitive map (e.g., O’Keefe & Nadel, 1978; Byrne et al., 2007; Bicanski & Burgess, 2018; Bicanski & Burgess, 2020; Bottini & Doeller, 2020). The positive correlation between the sequential distance and the hippocampal activation level might be mediated by adaptation (e.g., Grill-Spector & Malach, 2001): the longer the distance, the less the adaptation, and the greater the hippocampal activation. One way to interpret such a perspective-agnostic representation in the hippocampus is to view it as the associations among memorized entities (e.g., Muller et al., 1996; Eichenbaum, 2014, 2017; Buzsáki & Tingley, 2018; Quiroga, 2019). As a result, the allocentric representation in the hippocampus may not require a “reference frame” because there is no fixed reference point. Each entity is represented through its relations to others, contrasting with the egocentric representation in the parietal cortex, where the reference point is clearly the self.
In this context, the distinction between the allocentric and the egocentric representations of an event series can also be understood in terms of memory storage and retrieval (see also Byrne et al., 2007). The hippocampus stores event series in a static and perspective-agnostic manner, while the PPC flexibly retrieves memory by constructing egocentric images tied to variable perspectives. Supporting this hypothesis is evidence that bilateral hippocampal lesions result in deficits in memory for event series (e.g., Dede et al., 2016), whereas bilateral PPC lesions impair free recall but do not cause memory loss (e.g., Berryhill et al., 2007): patients can still recall details of past events when prompted with cues or questions but struggle to access memories spontaneously.
Finally, the study indicates that event durations were represented in the right hippocampal body, in line with previous studies suggesting that the hippocampus contributes to representing durations embedded within an event sequence (e.g., Barnett et al., 2014; Thavabalasingam et al., 2018; Lee et al., 2020). Notably, the post hoc analysis revealed a double dissociation pattern: the hippocampal head represented sequential distances between events, whereas the hippocampal body represented the duration of individual events. Evidence from the spatial domain has suggested that the anterior hippocampus (or the ventral rodent hippocampus) implements global and gist-like representations (e.g., larger receptive fields), whereas the posterior hippocampus (or the dorsal rodent hippocampus) implements local and detailed ones (e.g., finer receptive fields) (e.g., Jung et al., 1994; Kjelstrup et al., 2008; Collin et al., 2015; see reviews by Poppenk et al., 2013; Robin & Moscovitch, 2017; see Strange et al., 2014 for a different opinion). Recent evidence further shows that the organizational principle observed along the hippocampal long axis may also extend to the temporal domain (Montagrin et al., 2024). In that study, the anterior hippocampus showed greater activation for remote goals, whereas the posterior hippocampus was more strongly engaged for current goals, which are presumed to be represented in finer detail. Thus, the hippocampus likely represents the sequence of events hierarchically along its longitudinal axis: from the gist event sequence, to the event sequence constituting a gist event, down to the unitary time-bin sequence constituting an event as its duration. This view unifies the hippocampus’s role in sequence representation and explains why it only represents durations embedded within event sequences. Future studies are required to confirm this hypothesis and to investigate whether such hierarchical structures along the longitudinal axis of the hippocampus can also be used to represent events at different time scales (e.g., from years to seconds). It would also be intriguing to examine whether the parietal cortex has a similar hierarchical structure or plays a role in zooming in and out of events at different scales to maintain an appropriate working memory load.
To conclude, this study reveals the neural correlates of sequence and duration in time construals. The hippocampal head represents the event sequence allocentrically, irrespective of the observer’s perspective, whereas the hippocampal body represents the event duration embedded in the event sequence. The posterior parietal cortex flexibly constructs an egocentric representation of the event series that varies according to the observer’s perspective, which can be internal (i.e., mental time travel) or external (i.e., mental time watching). Such allocentric and egocentric representations of event series can be interpreted in terms of memory storage and retrieval.
Methods
Participants
Thirty-five native Italian speakers with no history of neurological or psychiatric disorders participated in the experiment. The sample size was chosen to align with the upper range of participant numbers reported in previous fMRI studies that successfully detected sequence or distance effects in the hippocampus (N = 15–34; e.g., Morgan et al., 2011; Howard et al., 2014; Deuker et al., 2016; Garvert et al., 2017; Theves et al., 2019; Park et al., 2021; Cristoforetti et al., 2022). The ethical committee of the University of Trento approved the experimental protocol (Approval Number 2019-018), and all participants provided written informed consent and were paid for their time. Three participants were excluded. Two were excluded due to poor behavioral performance during scanning: their accuracy fell more than 1.5 times the interquartile range below the lower quartile across participants. One was excluded due to excessive head motion during scanning: this participant’s mean framewise displacement (Power et al., 2014) exceeded the upper quartile across participants by more than 1.5 times the interquartile range. The remaining 32 participants entered the analysis (19 females, 13 males; age: 19-34; M = 24.0; SD = 3.8; all participants except one were right-handed).
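As an illustration, the two exclusion rules can be expressed as Tukey-style fences; a minimal sketch in R, where the vectors accuracy and mean_fd (one value per participant) are hypothetical placeholders:

```r
# lower fence for accuracy and upper fence for head motion (1.5 x IQR beyond the quartiles)
acc_fence <- quantile(accuracy, 0.25) - 1.5 * IQR(accuracy)
fd_fence  <- quantile(mean_fd, 0.75) + 1.5 * IQR(mean_fd)
excluded  <- which(accuracy < acc_fence | mean_fd > fd_fence)  # indices of excluded participants
```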
Stimuli
We created a fictional religious ritual as the stimulus. Like most rituals, it follows a specific sequence, endures particular durations, and happens on predetermined parts of the day (Figure 1B). The ritual comprised 15 events, falling into three parts of the day, i.e., morning, afternoon, and evening. Each part of the day included five events and lasted six hours in total: two events lasted for half an hour, one for one hour, and the other two for two hours. We instantiated each event with an event phrase. To minimize the potential confounding between the temporal information of the events and the semantic information of the phrases, we randomly assigned the 15 phrases to the events twice, generating two versions for participants with even and odd ID numbers.
Procedures
The day before fMRI scanning, participants learned the temporal information of the ritual: the sequence, the durations, and the parts of the day. The learning procedure included two phases: a reading phase followed by an imagination phase. This two-phase cycle was performed twice.
In the reading phase, participants read a narrative describing the whole ritual on a computer screen twice. The screen presented one sentence at a time, and each sentence provided information about one event. The presentation sequence was identical to the event sequence of the fictional ritual, and the sentences described the duration and the part of the day of each event. Participants read the sentences at their own pace, pressing the spacebar to advance to the next sentence.
In the imagination phase, participants imagined themselves performing the whole ritual one event after another, guided by prerecorded auditory instructions delivered through headphones. Each event started with a voice announcing the event to be imagined (e.g., “light some candle”). A single beep then indicated the start of the imagination, and a double beep indicated its end. The imagined sequence was the same as the actual sequence, the imagining duration (i.e., the interval between the single and the double beep) was proportional to the actual duration (30 seconds per hour; e.g., a two-hour event was imagined for 60 seconds), and the parts of the day were indicated in the instructions (e.g., “the morning starts”). Participants were told to close their eyes to avoid distractions and imagined the whole ritual four times in each imagination phase (i.e., eight times in total).
After learning, participants’ knowledge of the ritual was assessed with three tests. In the event-sequence test, participants judged whether one event happened in the past or the future of another event. In the event-duration test, participants imagined the ritual in a self-paced manner and pressed a button upon finishing the imagination of each event. In the parts-of-the-day test, participants judged whether two events happened in the same or different parts of the day. Every participant’s accuracy exceeded 80% in both the event-sequence and parts-of-the-day tests. In the event-duration test, Pearson’s correlation between the self-paced imagining duration and the actual duration across the 15 events exceeded 0.6 for all participants.
The scanning consisted of six runs, each with one external-perspective task block and one internal-perspective task block. The order of the two task blocks alternated across the six runs within each participant, and the order in the first run was counterbalanced across participants: half of the participants followed the order “EI-IE-EI-IE-EI-IE”, and half followed the order “IE-EI-IE-EI-IE-EI” (“E” indicating the external-perspective task block and “I” indicating the internal-perspective task block). Each task block started with a 5s task prompt indicating the task of that block and a 3s countdown (i.e., the screen presenting “3”, “2”, “1”). Each task block had 20 trials, which were identical in the two task blocks within the same run but were presented in a randomized order.
In each trial, the phrases of two events were visually presented one after the other (Figure 1C). Participants first saw a 0.5s fixation cross, the phrase of the reference event for 0.8s, and a blank screen for 3.2s (75% of trials) or 7.2s (25% of trials). They then saw another 0.5s fixation cross and the phrase of the target event. Each event phrase was presented in the center of a grey screen in two rows (black color, Calibri font): the first row contained the verb of the phrase (e.g., “light”), and the second row contained the object of the verb (e.g., “some candle”). When presenting the target event phrase, we also presented two possible answers under it; they were smaller in font size and colored differently depending on the task (i.e., red for the external-perspective task and blue for the internal-perspective task). In the external-perspective task block, participants judged whether the target event happened in the same or a different part of the day as the reference event, and the two option words were “same” and “different”. In the internal-perspective task block, participants were instructed to project themselves into the reference event and judge whether the target event happened in the past or will happen in the future, and the two option words were “past” and “future”.
Participants were instructed to perform the two tasks as soon as they saw the target event phrase by pressing two buttons with their right index and middle fingers, corresponding to the left and right options on the screen, respectively. The reaction time was calculated as the interval between the onset of the target event phrase and the button press. A blank screen replaced the target event presentation once the response was given, and the total duration of the target event and the blank screen was maintained at 4s. On 75% of occasions, the subsequent trial started immediately; on the other 25%, an additional 4s blank screen separated the trials.
We carefully selected two sets of 20 event pairs from the 210 possible combinations, assigning them to the odd and even runs of the fMRI experiment. Using a brute-force search, we identified sets of 20 pairs in which sequential distance showed only weak correlations with the positional information of both the reference and the target events (ranging from 1 to 15), as well as with the behavioral responses (Same vs. Different and Future vs. Past, coded as 0 and 1), with all correlation coefficients below 0.2. At the same time, we balanced the proportion of correct responses across conditions: for the external-perspective task, Same/Different = 11/9 and 12/8; for the internal-perspective task, Future/Past = 12/8 and 8/12. Under these constraints, the sequential distances in both sets ranged from 1 to 5. To further mitigate spatial response biases, we pseudorandomized the left/right on-screen positions of the two response options within each task block, while ensuring an equal number of correct responses mapped to the left and right buttons (i.e., 10 per block).
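Purely for illustration, the logic of such a brute-force search can be sketched in R as below; the sampling scheme is simplified (it checks only the correlation constraints described above, not the response-balancing constraints), and all variable names are placeholders.

```r
set.seed(1)
grid <- expand.grid(ref = 1:15, tgt = 1:15)
grid <- grid[grid$ref != grid$tgt, ]               # 210 ordered reference-target pairs
found <- NULL
for (iter in 1:1e5) {
  cand <- grid[sample(nrow(grid), 20), ]           # candidate set of 20 pairs
  dist <- abs(cand$tgt - cand$ref)                 # sequential distance
  same <- as.integer(ceiling(cand$ref / 5) == ceiling(cand$tgt / 5))  # 5 events per day part
  futu <- as.integer(cand$tgt > cand$ref)          # future (1) vs. past (0)
  cors <- c(cor(dist, cand$ref), cor(dist, cand$tgt), cor(dist, same), cor(dist, futu))
  if (anyNA(cors)) next                            # skip degenerate sets (constant predictors)
  if (all(abs(cors) < 0.2)) { found <- cand; break }
}
```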
Behavior analysis
To investigate the factors affecting the trial-by-trial RT of the correct trials, we built linear mixed models (LMMs) with Participant as the random-effects grouping factor, using the lme4 package (Bates et al., 2015). We fit the maximal model including the random intercept and all the random slopes consistent with the experimental design (as recommended by Barr et al., 2013). In the case of overfitting (singular fit), we removed the random slopes but kept the random intercept (Matuschek et al., 2017).
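A minimal sketch of this model-selection step with lme4 is given below; the abbreviated variable names and the data frame trials are placeholders, and the fixed-effect structure follows the formula reported in Table 1.

```r
library(lme4)
# maximal random-effects structure justified by the within-participant design
m_max <- lmer(rt ~ task * (seq_dist + duration + syll_len) +
                (1 + task * (seq_dist + duration + syll_len) | participant),
              data = trials)
# if the maximal model is singular (overfitted), keep only the random intercept
m <- if (isSingular(m_max)) {
  lmer(rt ~ task * (seq_dist + duration + syll_len) + (1 | participant), data = trials)
} else m_max
```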
MRI acquisition
MRI data were acquired using a MAGNETOM Prisma 3T MR scanner (Siemens) with a 64-channel head-neck coil at the Centre for Mind/Brain Sciences, University of Trento. Functional images were acquired using a simultaneous multislice echo-planar imaging sequence: the scanning plane was parallel to the long axis of the hippocampus, the phase encoding direction was from anterior to posterior, repetition time (TR) = 1000 ms, echo time (TE) = 28 ms, flip angle (FA) = 59°, field of view (FOV) = 200 mm × 200 mm, matrix size = 66 × 66, 65 axial slices, slice thickness (ST) = 3 mm, no gap, voxel size = 3.03 × 3.03 × 3 mm, multiband factor = 5.
Field maps were acquired between each pair of functional runs to correct for geometric distortions of those two runs. Each set of field maps included three images: one phase-difference image between two slightly different echo times and two magnitude images, one for each of these echo times. The scanning plane was parallel to the long axis of the hippocampus, the phase encoding direction was from anterior to posterior, TR = 1030 ms, shorter TE = 4.92 ms, longer TE = 7.38 ms, FA = 60°, FOV = 210 mm × 210 mm, matrix size = 70 × 70, 66 axial slices, ST = 3 mm, no gap, voxel size = 3 × 3 × 3 mm.
Three-dimensional T1-weighted images were acquired using a magnetization-prepared rapid gradient-echo sequence: sagittal plane, TR = 2530 ms, TE = 1.69 ms, inversion time = 1100 ms, FA = 7°, FOV = 256 mm × 256 mm, matrix size = 256 × 256, 176 contiguous sagittal slices, ST = 1 mm, voxel size = 1 × 1 × 1 mm.
MRI preprocessing
We preprocessed the brain images using SPM12 (https://www.fil.ion.ucl.ac.uk/spm/software/spm12/). The functional images were first realigned to the first image of the first run, generating six rigid head-motion parameters for each time point and a mean functional image across all the runs. We then used the field maps to calculate the voxel displacement maps and coregistered them to this mean functional image. Since we acquired the field maps between each pair of functional runs, we applied each voxel displacement map to its two closest functional runs to correct their geometric distortions. The resulting functional images were next normalized to MNI space with the acquired T1-weighted image using the unified segmentation method. In the final step, we spatially smoothed the normalized functional images with a 3D Gaussian kernel of 4 mm full width at half maximum.
First-level fMRI analysis
We performed the first-level analysis using SPM12 (https://www.fil.ion.ucl.ac.uk/spm/software/spm12/). General linear models (GLMs) were built to predict each participant’s blood-oxygen-level-dependent (BOLD) signal. The GLMs in the primary analysis included six conditions: the task prompt and the countdown at the beginning of each block, the reference event and the target event in the external-perspective task, and the reference event and the target event in the internal-perspective task. The durations of the task prompt and the countdown were set to their actual presentation durations (i.e., 5s and 3s, respectively). The durations of the four event conditions were set to 0. The resulting boxcar and stick functions were convolved with a canonical hemodynamic response function (HRF). Head-motion parameters and constant variables indicating each of the six runs were included as nuisance regressors. A high-pass filter with a cutoff of 128s was used to remove low-frequency noise and slow signal drifts.
We used the parametric modulation method to investigate the two core temporal variables: sequence (i.e., the number of events between the reference event and the target event) and duration (i.e., the duration of the target events in hours). The z-scores of these parameters across the events in each run were used as parametric modulators of the target events in both the external- and internal-perspective tasks. The option for orthogonalizing modulations in SPM was turned off (Mumford et al., 2015).
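The analysis itself was run in SPM12; purely as an illustration of what such a modulated regressor looks like, the sketch below builds one by hand in R, placing a canonical double-gamma HRF at each target-event onset and scaling it by the z-scored sequential distance (the onsets, run length, and distances are made-up placeholder values). The same construction applies to the Duration, Syllable Length, and RT regressors described below.

```r
TR     <- 1                                   # repetition time in seconds (as acquired)
n_scan <- 600                                 # illustrative run length in volumes
t_hrf  <- seq(0, 32, by = TR)
hrf    <- dgamma(t_hrf, shape = 6, rate = 1) - dgamma(t_hrf, shape = 16, rate = 1) / 6
hrf    <- hrf / max(hrf)                      # double-gamma approximation of the canonical HRF

onsets   <- c(20, 65, 112, 158, 203)          # placeholder target-event onsets (s)
seq_dist <- c(1, 4, 2, 5, 3)                  # placeholder sequential distances
mod      <- as.numeric(scale(seq_dist))       # z-score the modulator within the run

reg <- numeric(n_scan)                        # modulated regressor (stick function * HRF)
for (i in seq_along(onsets)) {
  start <- floor(onsets[i] / TR) + 1
  span  <- start:min(n_scan, start + length(hrf) - 1)
  reg[span] <- reg[span] + mod[i] * hrf[seq_along(span)]
}
# 'reg' enters the design matrix alongside the unmodulated target-event regressor
```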
As a sanity check, we tested whether the parametric modulation method used in this study could detect activation in the visual cortex, which should be modulated by the number of syllables in the visually presented event phrase. To this end, we built a stick function with sticks located at the onsets of both reference and target events, with the height of each stick set to the z-score of the number of syllables across the event phrases in each run. This stick function was convolved with the canonical HRF and included in the GLM as an additional regressor.
To detect activations specific to the external- and internal-perspective tasks, we directly contrasted the target event in the external-perspective task against the target event in the internal-perspective task. To investigate how neural activation was modulated by each type of temporal information across perspectives, we examined its interaction with task (i.e., contrasting the modulation effect between the external- and internal-perspective tasks; contrast weights: 1, -1) and its main effect regardless of task (contrast weights: 1, 1).
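In terms of the modulator betas, these two contrasts amount to the following weightings; this is a toy numerical example with made-up beta values, not estimates from the study.

```python
# Conceptual contrast weights over the two sequence-modulator regressors
# (external- and internal-perspective targets); beta values are hypothetical.
import numpy as np

betas = {'ext_target_x_distance': 0.8, 'int_target_x_distance': -0.5}
b = np.array([betas['ext_target_x_distance'], betas['int_target_x_distance']])

interaction = np.array([1, -1]) @ b   # external vs. internal modulation
main_effect = np.array([1,  1]) @ b   # modulation regardless of task
```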
To test whether any significant effects could be explained by RT, we also built the same GLM with trial-by-trial RT added as a covariate. To do so, we created a stick function with sticks positioned at the onsets of the target events in both external- and internal-perspective tasks, with the height of each stick set to the z-score of the corresponding RT. This stick function was convolved with the canonical HRF and served as a regressor in the GLM.
To directly illustrate how the activation level varied with sequential distance, we built an independent GLM with each distance (i.e., from 1 to 5) in each task as a separate condition (i.e., ten conditions in total). To directly illustrate how the activation level varied with duration, we built another GLM with each duration (i.e., 0.5, 1, and 2 hours) in each task as a separate condition (i.e., six conditions in total). In both GLMs, the task prompt and the countdown were included as conditions of no interest; their durations were set to the actual presentation durations (i.e., 5 s and 3 s, respectively), and the duration of all other conditions was set to 0. The resulting boxcar and stick functions were convolved with the canonical HRF. Head motion parameters and constant regressors indicating each of the six runs were included as nuisance regressors. A high-pass filter with a cutoff of 128 s was used to remove low-frequency noise and slow signal drifts.
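A minimal sketch of how each target event could be labeled by task and distance to form these separate conditions is shown below; the onsets and distances are placeholders.

```python
# Illustrative labeling of each target event by task and sequential distance,
# yielding the per-distance conditions of the distance GLM. Values are placeholders.
import pandas as pd

trials = pd.DataFrame({
    'onset':    [10.0, 24.0, 70.0, 84.0],
    'task':     ['external', 'external', 'internal', 'internal'],
    'distance': [2, 5, 1, 4],
})
trials['trial_type'] = trials['task'] + '_dist' + trials['distance'].astype(str)
trials['duration'] = 0.0   # modeled as stick functions
```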
Second-level fMRI analysis
We performed group-level one-sample tests on the first-level beta images from the univariate contrast and parametric modulation analyses. Statistical inference was performed using the permutation method implemented in the PALM toolbox (https://fsl.fmrib.ox.ac.uk/fsl/fslwiki/PALM). Five thousand sign flips were performed (Winkler et al., 2014), and a generalized Pareto distribution was fit to the tail of the permutation distribution for p-values below 0.1 (Knijnenburg et al., 2009; Winkler et al., 2016). We controlled the family-wise error rate using a conventional cluster-forming threshold (voxel-wise p < 0.001, cluster-level FWE-corrected p < 0.05) (Woo et al., 2014). Because the hippocampus is our primary region of interest, because its thin, elongated shape limits the number of contiguous voxels and thus the formation of large clusters, and because functionally independent clusters within the hippocampus may naturally be small, we performed multiple-comparison correction separately for the cortex and the hippocampus. We used the Automated Anatomical Labelling Atlas 3 to define these masks (Rolls et al., 2020): the cortical mask comprised all cortical regions in both hemispheres, and the hippocampal mask comprised the hippocampus in both hemispheres (i.e., areas No. 41 and No. 42 combined).
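For intuition, the sign-flipping scheme underlying this one-sample test can be sketched as follows. This is a conceptual illustration with simulated data, not the PALM implementation; the generalized Pareto tail approximation and the cluster-level correction are omitted.

```python
# Conceptual sketch of one-sample sign-flip permutation testing, analogous in spirit
# to the PALM analysis described above (tail approximation and cluster-level FWE
# correction omitted; data are simulated).
import numpy as np

rng = np.random.default_rng(0)
n_subjects, n_voxels = 24, 1000
betas = rng.normal(0.1, 1.0, size=(n_subjects, n_voxels))  # first-level effect estimates

def one_sample_t(x):
    return x.mean(axis=0) / (x.std(axis=0, ddof=1) / np.sqrt(x.shape[0]))

t_obs = one_sample_t(betas)

n_perm = 5000
max_null = np.empty(n_perm)
for i in range(n_perm):
    signs = rng.choice([-1.0, 1.0], size=(n_subjects, 1))  # flip each subject's sign
    max_null[i] = one_sample_t(signs * betas).max()         # max statistic across voxels

# Voxel-wise FWE-corrected p-values from the max-statistic null distribution
p_fwe = (1 + (max_null[None, :] >= t_obs[:, None]).sum(axis=1)) / (1 + n_perm)
```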
Figure supplements

All participants learnt the same event structure.
However, the order of the event phrases differed across participants (i.e., two event lists were used for even- and odd-numbered participants). This approach minimizes potential confounds between the temporal and semantic information carried by the events.

Reaction time analysis including Parts of the Day.
For diagnostic purposes, we plotted the partial residuals of each predictor that significantly influenced the reaction time (RT) of correct trials. The partial residual includes the effect of each variable, its interaction with Task Type, and the residuals from the full linear regression model. A. RT increased with Syllable Length, showing a similar trend in both tasks. B. Sequential Distance affected RT in opposite directions depending on perspective. C. Event Duration influenced RT only in the external-perspective task, with no effect in the internal-perspective task. D. The effect of Parts of Day was consistent with the effect of Sequential Distance, with two events in the same and different parts of the day corresponding to short and long sequential distances, respectively.

(A) Brain surface view of the univariate contrast between the external-perspective and internal-perspective tasks (voxel-level p < 0.001, cluster-level FWE-corrected p < 0.05). This view was created by transforming the significant clusters in MNI space shown in Figure 3A to the fsLR space using the neuromaps toolbox (https://github.com/netneurolab/neuromaps). All significant areas were in the right hemisphere. (B) Brain surface view of default network A and default network B, identified as the 8th and the 1st components, respectively, among the 25 components of the “group-ICA” template from the UK Biobank brain imaging resource (https://www.fmrib.ox.ac.uk/ukbiobank/). We preserved the voxels with positive values above 7 in MNI space and transformed them into the fsLR space using neuromaps. Both panels are plotted with Connectome Workbench 2.0 (https://www.humanconnectome.org/software/connectome-workbench) and displayed on an inflated surface against the group-averaged sulcus image of 1096 young adults from the Human Connectome Project dataset (https://balsa.wustl.edu/reference/pkXDZ).
Data availability
Figure 2-source data 1, Figure 3-source data 1, Figure 4-source data 1, and Figure 5-source data 1 contain the data used to generate the corresponding figures.
Acknowledgements
This work was supported by the European Research Council (ERC-StG, NOAM 804422) and the Italian Ministry of University and Research (MUR-FARE, MODGET R18WJMSNZF), awarded to R.B. We thank Mattia Silvestri for help with data collection and Simone Viganò for discussion.
Additional files
Additional information
Funding
- EC | European Research Council (ERC) (804422): Roberto Bottini
- Italian Ministry of University and Research (R18WJMSNZF): Roberto Bottini
References
- Finding the answer in space: The mental whiteboard hypothesis on serial order in working memory. Frontiers in Human Neuroscience 8:932.
- Topographical disorientation: a synthesis and taxonomy. Brain 122:1613–1628.
- Encoding of spatial location by posterior parietal neurons. Science 230:456–458.
- Multimodal representation of space in the posterior parietal cortex and its use in planning movements. Annual Review of Neuroscience 20:303–330.
- Self-agency and self-ownership in cognitive mapping. Trends in Cognitive Sciences 23:476–487.
- The mental time line: An analogue of the mental number line in the mapping of life events. Consciousness and Cognition 18:781–785.
- Subjective mental time: the functional architecture of projecting the self to past and future. European Journal of Neuroscience 30:2009–2017.
- Self in time: imagined self-location influences neural activity related to mental time travel. Journal of Neuroscience 28:6502–6507.
- Seelenlähmung des “Schauens”, optische Ataxie, räumliche Störung der Aufmerksamkeit. European Neurology 25:51–66.
- The human hippocampus is sensitive to the durations of events and intervals within a sequence. Neuropsychologia 64:1–12.
- Random effects structure for confirmatory hypothesis testing: Keep it maximal. Journal of Memory and Language 68:255–278.
- Fitting linear mixed-effects models using lme4. Journal of Statistical Software 67:1–48.
- Mapping sequence structure in the human lateral entorhinal cortex. eLife 8:e45333. https://doi.org/10.7554/eLife.45333
- Mnemonic construction and representation of temporal structure in the hippocampal formation. Nature Communications 13:3395.
- Sequence memory in the hippocampal–entorhinal region. Journal of Cognitive Neuroscience 32:2056–2070.
- Mapping spatial frames of reference onto time: A review of theoretical accounts and empirical findings. Cognition 132:342–382.
- Parietal lobe and episodic memory: bilateral damage causes impaired free recall of autobiographical memory. Journal of Neuroscience 27:14415–14423.
- A neural-level model of spatial memory and imagery. eLife 7:e33752. https://doi.org/10.7554/eLife.33752
- Neuronal vector coding in spatial cognition. Nature Reviews Neuroscience 21:453–470.
- The out-of-body experience: disturbed self-processing at the temporo-parietal junction. The Neuroscientist 11:16–24.
- Out-of-body experience and autoscopy of neurological origin. Brain 127:243–258.
- Knowledge across reference frames: Cognitive maps and image spaces. Trends in Cognitive Sciences 24:606–619.
- Parallel interdigitated distributed networks within the individual estimated by intrinsic functional connectivity. Neuron 95:457–471.
- Self-projection and the brain. Trends in Cognitive Sciences 11:49–57.
- Your brain is a time machine: The neuroscience and physics of time. WW Norton & Company.
- Population clocks: motor timing with neural dynamics. Trends in Cognitive Sciences 14:520–527.
- Space and time: the hippocampus as a sequence generator. Trends in Cognitive Sciences 22:853–869.
- Remembering the past and imagining the future: a neural model of spatial memory and imagery. Psychological Review 114:340.
- The parietal cortex and episodic memory: an attentional account. Nature Reviews Neuroscience 9:613–625.
- The human inferior parietal lobule in stereotaxic space. Brain Structure and Function 212:481–495.
- The human inferior parietal cortex: cytoarchitectonic parcellation and interindividual variability. Neuroimage 33:430–448.
- The enigma of Bálint’s syndrome: neural substrates and cognitive deficits. Frontiers in Human Neuroscience 8:123.
- Top-down and bottom-up attention to memory: a hypothesis (AtoM) on the role of the posterior parietal cortex in memory retrieval. Neuropsychologia 46:1828–1851.
- A common reference frame for movement plans in the posterior parietal cortex. Nature Reviews Neuroscience 3:553–562.
- Time regained: how the human brain constructs memory for time. Current Opinion in Behavioral Sciences 17:169–177.
- Memory hierarchies map onto the hippocampal long axis in humans. Nature Neuroscience 18:1562–1564.
- Neural patterns in parietal cortex and hippocampus distinguish retrieval of start versus end positions in working memory. Journal of Cognitive Neuroscience 34:1230–1245.
- How movements shape the perception of time. Trends in Cognitive Sciences 25:950–963.
- Learning and remembering real-world events after medial temporal lobe damage. Proceedings of the National Academy of Sciences 113:13480–13485.
- An event map of memory space in the hippocampus. eLife 5:e16534. https://doi.org/10.7554/eLife.16534
- Parallel distributed networks dissociate episodic and social functions within the individual. Journal of Neurophysiology 123:1144–1179.
- Disentangling reference frames in the neural compass. Imaging Neuroscience 2:1–18.
- Time cells in the hippocampus: a new dimension for mapping memories. Nature Reviews Neuroscience 15:732–744.
- On the integration of space, time, and memory. Neuron 95:1007–1018.
- A new SPM toolbox for combining probabilistic cytoarchitectonic maps and functional imaging data. Neuroimage 25:1325–1335.
- Broad domain generality in focal regions of frontal and parietal cortex. Proceedings of the National Academy of Sciences 110:16616–16621.
- Are all spatial reference frames egocentric? Reinterpreting evidence for allocentric, object-centered, or world-centered reference frames. Frontiers in Human Neuroscience 9:648.
- Cognitive mapping in mental time travel and mental space navigation. Cognition 154:55–68.
- Time is not space: core computations and domain-specific networks for mental travels. Journal of Neuroscience 36:11891–11903.
- Building the arrow of time… over time: a sequence of brain activity mapping imagined events in time and space. Cerebral Cortex 29:4398–4414.
- Hippocampal contribution to ordinal psychological time in the human brain. Journal of Cognitive Neuroscience 32:2071–2086.
- A map of abstract relational knowledge in the human hippocampal–entorhinal cortex. eLife 6:e17086. https://doi.org/10.7554/eLife.17086
- Disturbances of spatial orientation and visual attention, with loss of stereoscopic vision. Archives of Neurology & Psychiatry 1:385–407.
- Posterior parietal cortex and episodic retrieval: convergent and divergent effects of attention and memory. Learning & Memory 16:343–356.
- Comparison of spatial firing characteristics of units in dorsal and ventral hippocampus of the rat. Journal of Neuroscience 14:7347–7356.
- Global spatial disorientation: clinico-pathologic correlations. Journal of the Neurological Sciences 34:267–278.
- Finite scale of spatial representation in the hippocampus. Science 321:140–1.
- Fewer permutations, more accurate P-values. Bioinformatics 25:i161–i168.
- Visual images preserve metric spatial information: evidence from studies of image scanning. Journal of Experimental Psychology: Human Perception and Performance 4:47.
- Conceptual metaphor in everyday language. Journal of Philosophy 77:453–486.
- The hippocampus contributes to temporal duration memory in the context of event sequences: A cross-species perspective. Neuropsychologia 137:107300.
- Two visual systems in mental imagery: Dissociation of “what” and “where” in imagery disorders due to bilateral posterior cerebral lesions. Neurology 35:1010.
- Fine subdivisions of the semantic network supporting social and sensory–motor semantic processing. Cerebral Cortex 28:2699–2710.
- Balancing Type I error and power in linear mixed models. Journal of Memory and Language 94:305–315.
- The unreality of time. Mind :457–474.
- The hippocampus dissociates present from past and future goals. Nature Communications 15:4815.
- Mental comparison and the symbolic distance effect. Cognitive Psychology 8:228–246.
- Time required for judgements of numerical inequality. Nature 215:1519–1520.
- The hippocampus as a cognitive graph. The Journal of General Physiology 107:663.
- Orthogonalization of regressors in fMRI models. PLOS One 10:e0126255.
- The response time paradox in functional magnetic resonance imaging analyses. Nature Human Behaviour 8:349–360.
- The neural correlates of time: a meta-analysis of neuroimaging studies. Journal of Cognitive Neuroscience 31:1796–1826.
- The tangle of space and time in human cognition. Trends in Cognitive Sciences 17:220–229.
- The hippocampus as a cognitive map. Oxford University Press.
- A common cortical metric for spatial, temporal, and social distance. Journal of Neuroscience 34:1979–1987.
- Brain system for mental orientation in space, time, and person. Proceedings of the National Academy of Sciences 112:11072–11077.
- Long-axis specialization of the human hippocampus. Trends in Cognitive Sciences 17:230–240.
- Methods to detect, characterize, and remove motion artifact in resting state fMRI. Neuroimage 84:320–341.
- Chronotopic maps in human supplementary motor area. PLOS Biology 17:e3000026.
- Dissociating distinct cortical networks associated with subregions of the human medial temporal lobe using precision neuroimaging. Neuron 111:2756–2772.
- Cytoarchitectonic segregation of human posterior intraparietal and adjacent parieto-occipital sulcus and its relation to visuomotor and cognitive functions. Cerebral Cortex 29:1305–1327.
- Psychoanatomical substrates of Bálint’s syndrome. Journal of Neurology, Neurosurgery & Psychiatry 72:162–178.
- Lost in time: Relocating the perception of duration outside the brain. Neuroscience & Biobehavioral Reviews 105312.
- The comparative study of mental time travel. Trends in Cognitive Sciences 13:271–277.
- Details, gist and schema: hippocampal–neocortical interactions underlying recent and remote episodic and spatial memory. Current Opinion in Behavioral Sciences 17:114–123.
- Automated anatomical labelling atlas 3. Neuroimage 206:116189.
- Attention to memory and the environment: functional specialization and dynamic competition in human posterior parietal cortex. Journal of Neuroscience 30:8445–8456.
- The contribution of the human posterior parietal cortex to episodic memory. Nature Reviews Neuroscience 18:183–192.
- Perceptual illusion of rotation of three-dimensional objects. Science 191:952–954.
- The development of spatial representations of large-scale environments. Advances in Child Development and Behavior 10:9–55.
- Right supramarginal gyrus is crucial to overcome emotional egocentricity bias in social judgments. Journal of Neuroscience 33:15466–15476.
- Time, space, and events in language and cognition: a comparative view. Annals of the New York Academy of Sciences 1326:72–81.
- Impairment of an egocentric map of locations: Implications for perception and action. Cognitive Neuropsychology 13:481–524.
- The time machine in our mind. Cognitive Science 36:385–420.
- The theory of cognitive spacetime. Metaphor and Symbol 29:71–93.
- Functional organization of the hippocampal longitudinal axis. Nature Reviews Neuroscience 15:655–669.
- Mental time travel and the shaping of the human mind. Philosophical Transactions of the Royal Society B: Biological Sciences 364:1317–1324.
- Structure learning and the posterior parietal cortex. Progress in Neurobiology 184:101717.
- Multivoxel pattern similarity suggests the integration of temporal duration in hippocampal event sequence representations. NeuroImage 178:136–146.
- The hippocampus encodes distances in multidimensional feature space. Current Biology 29:1226–1.
- On the expression of spatio-temporal relations in language. Universals of Human Language 3:369–400.
- The neural bases for timing of durations. Nature Reviews Neuroscience 23:646–665.
- Précis of Elements of episodic memory. Behavioral and Brain Sciences 7:223–238.
- Episodic memory: From mind to brain. Annual Review of Psychology 53:1–25.
- Thinking tools: Gestures change thought about time. Topics in Cognitive Science 13:750–776.
- Mental search of concepts is supported by egocentric vector representations and restructured grid maps. Nature Communications 14:8132.
- Neural correlates of the first-person-perspective. Trends in Cognitive Sciences 7:38–42.
- Parietal lobe contributions to episodic memory retrieval. Trends in Cognitive Sciences 9:445–453.
- Posterior parietal cortex. Current Biology 27:R691–R695.
- Faster permutation inference in brain imaging. Neuroimage 141:502–516.
- Permutation inference for the general linear model. Neuroimage 92:381–397.
- The inner sense of time: how the brain creates a representation of duration. Nature Reviews Neuroscience 14:217–223.
- Cluster-extent based thresholding in fMRI analyses: pitfalls and recommendations. Neuroimage 91:412–419.
- Event structure in perception and conception. Psychological Bulletin 127:3.
- A back-propagation programmed network that simulates response properties of a subset of posterior parietal neurons. Nature 331:679–684.
Cite all versions
You can cite all versions using the DOI https://doi.org/10.7554/eLife.107273. This DOI represents all versions, and will always resolve to the latest one.
Copyright
© 2025, Xu et al.
This article is distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use and redistribution provided that the original author and source are credited.