Neuroscience

Processing of different spatial scales in the human brain

  1. Michael Peer (corresponding author)
  2. Yorai Ron
  3. Rotem Monsa
  4. Shahar Arzy

Affiliations:
  1. Hebrew University of Jerusalem, Israel
  2. Hadassah Hebrew University Medical School, Israel
  3. University of Pennsylvania, United States
Research Article
Cite this article as: eLife 2019;8:e47492 doi: 10.7554/eLife.47492

Abstract

Humans navigate across a range of spatial scales, from rooms to continents, but the brain systems underlying spatial cognition are usually investigated only in small-scale environments. Do the same brain systems represent and process larger spaces? Here we asked subjects to compare distances between real-world items at six different spatial scales (room, building, neighborhood, city, country, continent) under functional MRI. Cortical activity showed a gradual progression from small- to large-scale processing, along three gradients extending anteriorly from the parahippocampal place area (PPA), retrosplenial complex (RSC) and occipital place area (OPA), and along the hippocampal posterior-anterior axis. Each of the cortical gradients overlapped with the visual system posteriorly and the default-mode network (DMN) anteriorly. These results suggest a progression from concrete to abstract processing with increasing spatial scale, and offer a new organizational framework for the brain’s spatial system, one that may also apply to conceptual spaces beyond the spatial domain.

https://doi.org/10.7554/eLife.47492.001

Introduction

Over the past few decades, research on the brain’s spatial system has advanced tremendously, providing insights into how the brain represents complex information and how these processes are impaired in disease states (e.g. Banino et al., 2018; Kunz et al., 2015; for reviews see Buzsáki and Moser, 2013; Epstein et al., 2017; Moser et al., 2008). However, scientific investigations of spatial cognition in humans and animals are often limited to small-scale environments such as single rooms or short walkable pathways. It is therefore unclear whether representation and processing of large-scale environments rely on the same neurocognitive systems (Wolbers and Wiener, 2014). This question is important for several reasons. First, the lack of knowledge about how the brain’s spatial system treats different spatial scales affects the interpretation of past investigations that used different types of experimental environments. Second, disorientation is a prevalent symptom across neurological and psychiatric disorders, but remains poorly understood and diagnosed, in part because it may have different subtypes that manifest at different spatial scales (Peer et al., 2014). Finally, recent findings suggest that the brain’s spatial system is also used to represent conceptual knowledge (Behrens et al., 2018; Bellmund et al., 2018; Constantinescu et al., 2016; Gärdenfors, 2000). Since large-scale environments are often remembered in a schematic manner inconsistent with Euclidean geometry (McNamara, 1986; Moar and Bower, 1983; Tversky, 1981), understanding their representation may provide clues to the representation of abstract domains.

Previous neuroscientific evidence supports the idea that the brain’s spatial representations are not unified but separated into multiple scales. Functional MRI studies in humans demonstrated that locations within rooms and their surrounding buildings are coded in different cortical regions (Kim and Maguire, 2018), and that directions are represented in the retrosplenial complex with respect to the local axis of a room irrespective of its large-scale context (Marchette et al., 2014). Electrophysiological evidence in animals also points to separate representation of small scale regions and their large-scale context, as grid- and place-cells within the medial temporal lobe undergo remapping when crossing borders between rooms (Fyhn et al., 2007; Skaggs and McNaughton, 1998; Tanila, 1999), and form independent representations of different segments of the environment (Derdikman et al., 2009; Derdikman and Moser, 2010; Paz-Villagrán et al., 2004; Spiers et al., 2015). Recordings from the rat retrosplenial cortex also demonstrate coding of location both in the immediate small-scale region and in the large-scale surrounding environment (Alexander and Nitz, 2017; Alexander and Nitz, 2015). Finally, evidence from patients with disorientation disorders shows that disorientation can be limited to a specific spatial scale according to the underlying lesion (Peer et al., 2014). Patients with lateral parietal cortex lesions are impaired in navigating their immediate, small-scale environment (‘egocentric disorientation’; Aguirre and D'Esposito, 1999; Stark, 1996; Wilson et al., 2005). In contrast, patients with retrosplenial lesions (Aguirre and D'Esposito, 1999; Takahashi et al., 1997) and Alzheimer’s disease (Monacelli et al., 2003; Peters-Founshtein et al., 2018) show the opposite pattern – correct localization in the immediately visible environment but inability to navigate in the larger unseen environment. 
Despite this evidence, few neuroscientific studies have directly contrasted representations of different scales of space. Several studies indicated a posterior-to-anterior progression from small to large scales along the hippocampal axis, manifested as larger spatial receptive fields, in both humans and animals (Brunec et al., 2018; Kjelstrup et al., 2008; Poppenk et al., 2013). However, these investigations used routes of only up to several meters, and focused on the hippocampus rather than the rest of the brain’s spatial system. Another fMRI study contrasted coarse- and fine-grained spatial judgments at a single scale (the city), finding increased hippocampal activity for fine-grained distinctions (Hirshhorn et al., 2012a). In the current work, we sought to characterize human brain activity under ecological experimental settings, across a large range of spatial scales, while directly manipulating only the parameter of spatial scale. To this aim, we asked subjects to compare distances between real-world, personally familiar locations across six spatial scales (rooms, buildings, neighborhoods, cities, countries and continents; Figure 1) under functional MRI, and looked for differences in brain response across the different scales.

Figure 1 with 2 supplements
Study design and stimuli.

(A) The design of the study. In each block, subjects viewed one target item in a specific scale and location, and then performed four proximity comparisons for pairs of other items from the same location. All stimuli were provided by the subjects from locations personally familiar to them, and target and comparison items were chosen randomly from the subject’s stimulus set. (B) Examples of stimuli (subject-provided locations and items) in each spatial scale.

https://doi.org/10.7554/eLife.47492.002

Results

Posterior-anterior gradients of spatial scale selectivity

To investigate spatial scale-selective activity, we looked for voxels showing differences in response to task performance at the different scales, and characterized their gradual response profiles by fitting a Gaussian function to the beta value graphs at each voxel (Figure 2—figure supplement 1). This analysis identified three cortical regions that displayed a continuous gradual shift in spatial scale selectivity: the medial temporal cortex, medial parietal cortex and lateral parieto-occipital cortex (Figure 2A–D, Figure 2—figure supplement 2). Activity in these regions displayed a gradual shift from selectivity for the smallest spatial scales (room, building) in their posterior parts, through selectivity for medium scales (neighborhood, city) more anteriorly, to the largest scales (country, continent) in the most anterior part of each gradient (Figure 2E; p<0.001 for all gradients, permutation test on linear fit slope, FDR-corrected). The three scale-selective gradients were symmetric across the two hemispheres. Extraction of the scale with maximal response from each voxel (while disregarding the pattern of activity at other scales) also demonstrated a posterior-to-anterior progression along the three abovementioned gradients (Figure 2E, Figure 2—figure supplement 3; p<0.001 for all gradients, permutation test on linear fit slope, FDR-corrected). To further characterize the scale selectivity of each region, we plotted the event-related activity and beta values for each spatial scale at each part of the three gradients. Results showed the same gradual posterior-anterior shift from small to large spatial scales, with each part of the gradient having a preferred scale and gradually diminishing activity for the scales around it (Figure 2—figure supplement 4A–C).
Finally, in light of previous findings of spatial scale selectivity changes along the hippocampal long axis (Brunec et al., 2018; Poppenk et al., 2013), we measured average spatial scale selectivity along the hippocampus. Activity shifted from small to large scales along the posterior-anterior axis of the hippocampus (Figure 2E; p<0.001 for average position of Gaussian fit peak, permutation test on linear fit slope, FDR-corrected). Using the same analysis at the individual subject level, 16 of 19 subjects showed a significant increase in preferred scale along the lateral parietal gradient, 17 of 19 along the medial temporal gradient, 17 of 19 along the medial parietal gradient, and 6 of 19 along the hippocampus (all p<0.05, permutation test on linear fit slope, FDR-corrected).
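The core of this analysis can be sketched in code. The following minimal Python illustration (function names, starting values and parameters are our own assumptions, not the authors' actual pipeline) fits a Gaussian tuning curve to a voxel's six beta values to estimate its preferred scale, and runs a permutation test on the slope of preferred scale against the posterior-anterior coordinate:

```python
import numpy as np
from scipy.optimize import curve_fit

SCALES = np.arange(6)  # 0=room, 1=building, 2=neighborhood, 3=city, 4=country, 5=continent

def gaussian(x, amp, mu, sigma, baseline):
    return baseline + amp * np.exp(-((x - mu) ** 2) / (2 * sigma ** 2))

def fit_peak_scale(betas):
    """Fit a Gaussian to one voxel's six beta values; return the peak
    position (preferred scale) and the goodness of fit (r^2)."""
    p0 = [betas.max() - betas.min(), float(np.argmax(betas)), 1.0, betas.min()]
    params, _ = curve_fit(gaussian, SCALES, betas, p0=p0, maxfev=5000)
    pred = gaussian(SCALES, *params)
    r2 = 1 - np.sum((betas - pred) ** 2) / np.sum((betas - betas.mean()) ** 2)
    return params[1], r2

def permutation_p_for_slope(y_coords, peak_scales, n_perm=1000, rng=None):
    """Permutation test: is the linear-fit slope of preferred scale
    against the posterior-anterior (y) coordinate larger than chance?"""
    rng = np.random.default_rng(rng)
    observed = np.polyfit(y_coords, peak_scales, 1)[0]
    null = np.array([np.polyfit(y_coords, rng.permutation(peak_scales), 1)[0]
                     for _ in range(n_perm)])
    return (np.sum(null >= observed) + 1) / (n_perm + 1)
```

A voxel whose betas peak at the city scale would yield a fitted peak near 3; applying the permutation test to all scale-sensitive voxels along a gradient tests whether preferred scale increases anteriorly.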

Figure 2 with 5 supplements
Small to large spatial scales preferentially activate regions along continuous posterior-anterior gradients.

Three cortical gradients were observed, demonstrating a continuous shift in spatial scale selectivity. Within each gradient, posterior regions were selectively active for smaller spatial scales, and anterior ones for larger spatial scales. Colors indicate Gaussian fit peak scale position (voxels identified by ANOVA across beta values, p<0.01, FDR-corrected for multiple comparisons, minimum r2 of fit = 0.7). (A) Medial parietal gradient, (B) Medial temporal gradient, (C) Lateral occipito-parietal gradient. (D) 3D visualization of the two medial gradients (gradients marked by dashed arrows, other activations not shown). (E) Change in average spatial scale selectivity along the posterior-anterior axis of each gradient and along the hippocampal long axis (X axis represents MNI coordinates from posterior to anterior; blue – average position of the Gaussian fit peak for all scale-sensitive voxels at each coordinate; red – average position of the scale with maximum activity for all scale-sensitive voxels at each coordinate). RH – right hemisphere, LH – left hemisphere. Full volume maps of these results are available online at https://github.com/CompuNeuroPsychiatryLabEinKerem/publications_data/tree/master/spatial_scales (Peer et al., 2019, copy archived at https://github.com/elifesciences-publications/publications_data).

https://doi.org/10.7554/eLife.47492.005

In addition to the continuous gradients, several other brain regions displayed scale-specific activity not organized as a continuous gradient (Figure 3, Supplementary file 1). Clusters of activity at the supramarginal gyrus, posterior temporal cortex, superior frontal gyrus and dorsal precuneus displayed the highest activity levels for the smallest spatial scales (room and building), and their activity gradually diminished for larger scales (Figure 2—figure supplement 4D). In contrast, the lateral occipital cortex and the anterior medial prefrontal cortex clusters displayed the opposite pattern of higher activity for the largest spatial scales (city, country and continent), and gradually decreasing activity for the smaller scales (Figure 2—figure supplement 4D).

Scale-selective activity along gradients and additional cortical regions.

Surface view of all scale-selective cortical activations (including regions outside of the three gradients; voxels identified by ANOVA across beta values, p<0.01, FDR-corrected for multiple comparisons, minimum r2 of fit = 0.7, cluster threshold = 15 mm2). Continuous scale-sensitive gradients are marked by white dashed lines. Full volume maps of these results are available online at https://github.com/CompuNeuroPsychiatryLabEinKerem/publications_data/tree/master/spatial_scales (Peer et al., 2019, copy archived at https://github.com/elifesciences-publications/publications_data).

https://doi.org/10.7554/eLife.47492.011

The three cortical scale-selective gradients extend anteriorly from scene-responsive cortical regions

The three cortical gradients identified by our analyses are located in close proximity to known scene-responsive cortical regions – the parahippocampal place area (PPA), retrosplenial complex (RSC) and occipital place area (OPA) (Epstein et al., 2017). To determine the exact locations of these regions with respect to our findings, we used masks of these regions as previously defined on an independent sample (Julian et al., 2012). The three regions (PPA, RSC and OPA) were found to be situated at the posterior parts of the medial temporal, medial parietal and lateral occipito-parietal gradients, respectively. Accordingly, the scene-responsive regions were most active for the small and medium scales: room, building and neighborhood (Figure 4). This finding suggests their stronger involvement in the processing of immediately visible scenes, compared to more abstract larger environments. However, these regions also showed activity, though to a lesser extent, for the larger scales, suggesting that their computational role may extend beyond the exclusive processing of the immediately visible environment (Figure 4).

Visual scene-responsive cortical regions (PPA, RSC and OPA) are preferentially active for small to medium spatial scales.

Scene-responsive cortical regions (marked by a black outline) were defined, using a publicly available dataset, by responses to a places > objects contrast in a separate subject sample (Julian et al., 2012). (A) Retrosplenial complex (RSC), (B) parahippocampal place area (PPA), (C) occipital place area (OPA). Left – overlap of scene-responsive regions and the three scale-sensitive gradients. Right – average beta weights for each condition (scale) within each region of interest (error bars represent standard errors across subjects). The visual scene-responsive regions are situated at the posterior parts of the three gradients, and are therefore mostly active during processing of small- to medium-scale environments.

https://doi.org/10.7554/eLife.47492.012

The three cortical gradients indicate a shift between the visual and default-mode brain networks

To relate the three cortical gradients to large-scale brain organization, we compared their anatomical distribution to a parcellation of the brain into seven cortical resting-state fMRI networks, as identified in data from 1000 subjects (Yeo et al., 2011). Across the three gradients, the posterior regions (related to processing of small scales) overlapped mainly with the visual network, while the anterior regions (related to processing of large scales) mainly overlapped with the default-mode network (Supplementary file 1).

Differences in scale selectivity between the three cortical gradients

The previous analyses identified three cortical regions with gradual progression of scale selectivity. We next attempted to identify differences between these three regions that may be indicative of their functions. To this aim, we analyzed the number of voxels with preferential activity for each scale within each gradient (Figure 5, Figure 5—figure supplement 1). The medial parietal gradient was mostly active for the neighborhood, city and continent scales, indicating a role for this region in processing medium to large scale environments. In contrast, the medial temporal gradient contained mostly voxels sensitive to scales up to the city level, suggesting that this region is involved mostly in processing small to medium scales. Finally, the lateral occipito-parietal gradient was most active for the smallest scales (room, building) and the largest (continent) scale. These findings demonstrate that despite their similar posterior-anterior organization, the three scale-sensitive cortical gradients have different scale preferences, indicating possible different spatial processing functions.
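The voxel-count comparison in this analysis amounts to a histogram of preferred scales within each gradient's mask. A minimal sketch (array and function names are illustrative assumptions, not taken from the authors' code):

```python
import numpy as np

def scale_preference_counts(peak_scales, gradient_mask, n_scales=6):
    """Count voxels whose Gaussian-fit peak falls at each of the six
    scales (0 = room ... 5 = continent) within one gradient's mask.
    peak_scales: 1-D array of fitted peak positions, one per voxel.
    gradient_mask: boolean array selecting this gradient's voxels."""
    peaks = np.round(peak_scales[gradient_mask]).astype(int)
    peaks = np.clip(peaks, 0, n_scales - 1)  # keep peaks inside the scale range
    return np.bincount(peaks, minlength=n_scales)
```

Comparing the resulting six-bin histograms across the three gradient masks would show, for example, the medial temporal gradient dominated by small-to-medium bins and the medial parietal gradient by medium-to-large bins.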

Figure 5 with 1 supplement
The three scale-selective cortical gradients have different voxel distributions, demonstrating preference for processing different spatial scales.

The position of the Gaussian fit peak was used to identify voxels responsive to each scale. Voxel numbers are described within each gradient. Results indicate that the medial parietal gradient mostly represents scales at the neighborhood level and larger, the medial temporal gradient mostly represents environments up to neighborhood-sized scales with only small portions dedicated to larger scales, and the lateral occipito-parietal gradient is highly active both for the smallest scales and for the largest ones.

https://doi.org/10.7554/eLife.47492.013

Subjects’ behavioral ratings and their relation to the scale effects

Analysis of subjects’ ratings of emotional significance and task difficulty for each location indicated no significant differences between scales, except for a difficulty difference between the continent scale and the room and neighborhood scales (Figure 1—figure supplement 2A–B; correlation between difficulty and scale, r = 0.39; p<0.05, two-tailed one-sample t-test across subjects). Familiarity ratings did differ significantly across scales, with larger average familiarity for the smaller-scale environments (Figure 1—figure supplement 2C; average correlation of familiarity and scale increase, r = −0.72; p<0.05, two-tailed one-sample t-test across subjects). First-person and third-person perspective-taking ratings were also highly correlated with scale increase, indicating a gradual shift from imagination of locations from a ground-level view in small-scale environments to imagination from a bird’s-eye view in large-scale environments (r = −0.81, r = 0.80, respectively; both p<0.05, two-tailed one-sample t-test across subjects; Figure 1—figure supplement 2E, Supplementary file 1). Response times did not differ significantly between scales (Figure 1—figure supplement 2D). The verbal descriptions of task-solving strategy confirmed the trend of decreasing ground-level and increasing map-like (or ‘bird’s-eye’) imagination with increasing scale (Supplementary file 1). These descriptions also demonstrated that as the scale decreased, subjects increasingly relied on estimations of walking or driving times between locations, except at the room scale, where this strategy was not used (Supplementary file 1).
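The rating analyses above follow a standard two-level scheme: correlate each subject's per-scale ratings with scale rank, then test the subject-level correlations against zero across subjects. A minimal sketch under an assumed data layout (not the authors' code):

```python
import numpy as np
from scipy import stats

def rating_scale_effect(ratings):
    """ratings: (n_subjects, 6) array of mean ratings per scale,
    ordered from room to continent. Returns each subject's Pearson r
    between rating and scale rank, plus the two-tailed one-sample
    t-test of those r values against zero across subjects."""
    scale_rank = np.arange(ratings.shape[1])
    rs = np.array([stats.pearsonr(scale_rank, subj)[0] for subj in ratings])
    t, p = stats.ttest_1samp(rs, 0.0)
    return rs, t, p
```

With familiarity ratings that decrease with scale, the per-subject correlations would be strongly negative and the group t-test would reject zero, matching the pattern reported above (average r = −0.72).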

To measure the effect of these different factors on the observed activations, we used parametric modulation with subjects’ ratings of emotion, familiarity, difficulty, perspective taking and strategy. The familiarity and perspective-taking (first-person and third-person) ratings, and reports of use of a map strategy, showed significant effects inside the scale-related gradients, in accordance with their high correlation with spatial scale (Figure 2—figure supplement 5). No other factor showed any significantly active regions in this analysis.

We next contrasted the activity for the experimental task with that for the lexical control task at each region. Within the three gradients, this contrast revealed significantly higher activity for the spatial task compared to the lexical control task (GLM contrast, all p-values<0.05, FDR-corrected for multiple comparisons across regions), except for the anterior city-, country- and continent-related regions in the medial temporal gradient and the continent region in the occipito-parietal gradient. Among the other scale-sensitive regions outside of the gradients, only the supramarginal and lateral occipital cortex clusters did not show significantly higher activity than the lexical control task.

Discussion

Our investigation revealed several novel findings. First, spatial scale sensitivity was found to be organized along three cortical gradients, extending anteriorly from the three known scene-responsive regions (PPA, RSC and OPA), as well as along the long axis of the hippocampus. These gradients were organized such that their posterior parts were most active for the smallest spatial scales and their anterior parts for the largest spatial scales. In addition, the posterior parts of the cortical gradients overlapped with the brain’s visual network, while the anterior parts extended into the default-mode network (DMN). Spatial scale sensitivity was differentially distributed between these gradients, with the medial temporal gradient preferentially active for small- to medium-scale environments, the medial parietal gradient for medium- to large-scale environments, and the lateral occipito-parietal gradient for the small and large scales but not for medium-sized ones. These scale-selective gradients were correlated with a shift from detailed to less-detailed knowledge of locations, and from first- to third-person perspective taking with increasing scale. In the following, we discuss our results with respect to previous theories of spatial processing as well as findings regarding the spatial system’s organization and its role in conceptual processing.

Several theories of how the cognitive system processes different spatial scales have previously been proposed. Early authors suggested a scale-independent unitary system for spatial representation, such as a hierarchical tree that stores relations between segments at each hierarchical level, irrespective of its scale (Hirtle and Jonides, 1985; Holding, 1994; Worden, 1992). In contrast, other authors suggested that different neurocognitive systems are responsible for representation of different spatial scales. According to dual systems theories, local room-sized environments are stored using a precise metric reference frame, and larger environments are represented as a schematic, non-metric graph connecting these smaller environments (Meilinger, 2008; Werner et al., 2000). Finally, multiple systems theories claim that separate systems process different spatial scales. The suggested scales include figural/graphics spaces (smaller than the body), vista spaces (small environments that can be grasped from one location), navigation/environmental spaces (large spaces learned through navigation), and geographical spaces (regions too large to be learned through navigation, which are learned mainly through maps) (Montello, 1993; Tversky, 2003). Consequently, the different types of theories offer different predictions about the brain activity involved in computations at different scales. While dual and multiple systems theories would predict activation of different brain regions for different spatial scales, the unitary system theory would predict activity within the same brain regions across scales. Our findings showed that all scale-sensitive regions are active across a range of spatial scales, with activity shifting along functional gradients within the same brain regions.
Therefore, our findings seem to reconcile the different theories, showing a unitary system that is involved in spatial processing across a range of spatial scales, while nevertheless having an internal organization according to scale.

Several factors might explain the shift in cortical activity when subjects make judgments at different scales. One element that may differ between scales is the amount of movement involved in their navigation and initial learning, although we did not find consistent differences between reports of imagined movement at different scales. Alternatively, subjects may use personally relevant episodic memories to a different degree in order to perform the task at different scales, although the limited time allowed for each comparison and the lack of differences in emotional significance ratings between locations limit this possibility. Other potential contributing factors include differences in the level of familiarity/detailed knowledge of locations between the different scales, and a shift in perspective taking between first- and third-person imagination. Subjects’ behavioral ratings and verbal descriptions show that when thinking of larger scales, subjects shift to an imagined bird’s-eye perspective and have less detailed knowledge of locations. Previous studies that directly manipulated familiarity and level of knowledge of locations (Epstein et al., 2007b; Epstein et al., 2007a; Hirshhorn et al., 2012b; Wolbers and Büchel, 2005) or first- vs. third-person perspective taking (Rosenbaum et al., 2004; Sherrill et al., 2013) found differences in activity within the OPA, RSC and PPA (generally higher activity for more well-known places and first-person perspective, as found here). However, these studies did not find activity shifting to more anterior cortical regions for third-person perspective taking or less well-known locations, as shown in the scale-sensitive gradients described in this study. Therefore, level of familiarity and perspective taking might not entirely explain the observed gradients.
These findings might be explained by the idea that posterior gradient regions contain detailed spatial information, supported by the visual system and acquired using a first-person perspective; as the scale increases, knowledge becomes less detailed and more abstract and schematic, supporting a bird’s-eye/map like imagination (Arzy and Schacter, 2019).

Past studies have found evidence for posterior-anterior subdivisions in the PPA and RSC (Baldassano et al., 2016; Baldassano et al., 2013; Burles et al., 2018; Chrastil et al., 2018; Silson et al., 2019; Silson et al., 2016). Posterior parts of both regions were active during visual scene viewing and navigation, and were functionally connected to visual regions. In contrast, anterior regions were active during imagination and recall of relations between unseen parts of the larger environment, and were functionally connected to the anterior hippocampus and DMN. These findings were interpreted as evidence for two spatial systems: a posterior system involved in perceptual analysis and encoding of visual scenes, and an anterior system responsible for scene recall from memory (Baldassano et al., 2016; Burles et al., 2018; Chrastil et al., 2018). Our results provide several novel insights related to these past findings. First, instead of a binary distinction between two systems in scene-selective regions (PPA, RSC and OPA), we found a continuous gradient operating both within these regions and extending anteriorly from them. Second, all investigated conditions involved only recall of environments from memory, suggesting that posterior-anterior activity differences do not relate only to direct visual processing vs. scene memory. Instead, the scale of representation may be important for organizing activity along the posterior-anterior axis. Third, we found that the cortical posterior-anterior organization by spatial scale also exists along the hippocampal long axis, in agreement with past findings (Brunec et al., 2018; Kjelstrup et al., 2008). Hippocampal long-axis organization was previously suggested to relate to the level of detail vs. abstractness of the representation, both in space and in other memory domains (Brunec et al., 2018; Poppenk et al., 2013). We hypothesize that the same principle of detailed vs. general-schematic representation applies to the scale-sensitive cortical gradients we identified. Indeed, behavioral work has demonstrated that while humans represent small-scale environments in a precise, Euclidean manner, in larger environments they may use a more flexible representational system (Meilinger, 2008; Wolbers and Wiener, 2014). This representation may take the form of a ‘cognitive graph’ that represents relations between locations topologically (Chrastil and Warren, 2014; Epstein, 2008; Meilinger, 2008; Warren et al., 2017), resulting in behavioral biases and navigational mistakes (Moar and Bower, 1983; Tversky, 1981). Thus, the general posterior-anterior organization of the spatial system may relate to precise relational encoding in posterior regions vs. a flexible, cognitive-graph-like representation of larger spaces in anterior regions.

Across the three cortical gradients, we found that posterior regions correspond to visual scene-processing regions (PPA, RSC and OPA), while anterior regions were part of the default-mode network (DMN), in accordance with previous findings (Baldassano et al., 2016; Baldassano et al., 2013; Chrastil et al., 2018). The RSC, PPA and OPA are considered to be regions specializing in spatial processing (Dilks et al., 2013; Epstein and Kanwisher, 1998; Epstein and Ward, 2010). In contrast, the DMN is active both during rest and across a variety of high-level, mostly self-referenced, cognitive processes that extend beyond the spatial domain (Buckner et al., 2008; Buckner and Carroll, 2007; Peer et al., 2015; Simony et al., 2016; Spreng et al., 2009). Thus, the posterior-anterior gradients we identified might reflect a shift from representing visually observable spatial relations in small-scale spaces to representing more abstract relations in larger environments. Indeed, recent investigations suggested a general cortical organization scheme, in which information gradually progresses from sensory regions to form high-level cognitive representations in the DMN (Huntenburg et al., 2018; Margulies et al., 2016). In a previous study, we found that posterior regions within the medial parietal cortex specialize in processing spatial relations, while the regions anterior to them process temporal relations between events and social relations between people (Peer et al., 2015). Similarly, it has been shown that the posterior RSC and hippocampus are more active for spatial judgments, while regions anterior to them are active for general episodic memory (Hirshhorn et al., 2012a).
Moreover, a posterior-anterior gradient was observed in studies of brain processing of different timescales, when transitioning from small, immediately perceivable temporal windows (single seconds) to larger windows (several minutes) that require integration across time (Baldassano et al., 2017; Chen et al., 2016; Hasson et al., 2008). The hippocampus and its posterior-anterior organization were also related in previous works to processing of both spatial and non-spatial knowledge (Eichenbaum, 2000), leading to suggestions that representation of conceptual knowledge relies on a geometric, spatial-like processing system (Behrens et al., 2018; Bellmund et al., 2018; Casasanto and Boroditsky, 2008; Gärdenfors, 2000; Liberman and Trope, 2008; Parkinson and Wheatley, 2013). Our findings suggest that, outside the hippocampus as well, the scene-selective RSC, PPA and OPA, which are usually studied in isolation within the field of spatial neuroscience, may combine with the DMN to form a generalized brain system for conceptual knowledge organization.

Besides the three gradients, several other bilateral cortical regions showed sensitivity to spatial scale. These regions included the superior frontal gyrus, supramarginal gyrus, posterior temporal cortex and dorsal precuneus, which had the highest activity for the smallest spatial scale (room) and decreasing activity with increasing scale. These regions may support processes preferentially engaged by local environments, such as egocentric perspective taking, in accordance with our subjects’ reports (Figure 1—figure supplement 2, Supplementary file 1) and the parietal cortex’s role in egocentric processing of the immediately surrounding environment (Byrne et al., 2007; Wilson et al., 2005). In contrast, clusters at the lateral occipital and medial prefrontal cortices displayed the opposite pattern of maximal activity at large spatial scales and decreasing activity with decreasing scale. This pattern may reflect processes that are employed more at large scales, such as visualization of maps and routes when imagining large-scale spaces (Montello, 1993; Tversky, 2003), in accordance with subjects’ reports (Supplementary file 1). Future experiments may untangle the role of each of these activation clusters in small-scale- and large-scale-specific processing.

Our findings offer several new insights regarding the distinct roles of different parts of the brain’s spatial processing system. The medial parietal gradient, extending from the RSC, was found to be preferentially active for processing of large environments, ranging from neighborhoods to continents. Previous research has shown that the RSC is involved in locating places within their large-scale context, such as when pointing in the direction of far-away, unseen landmarks (Epstein, 2008; Epstein et al., 2007b; Maguire, 2001). Additionally, it was suggested to be related to representations of approximate relations between locations as a cognitive graph (Epstein, 2008; Epstein and Vass, 2014). Therefore, the RSC may be involved in locating places within their larger context across environments of different sizes. Interestingly, a recent meta-analysis demonstrated that the posterior part of the RSC/posterior cingulate cortex is active when directly viewing scenes while its more dorso-anterior part is active when locating items in larger unobservable environments, further supporting this interpretation and the gradients we identified (Burles et al., 2018). In a similar manner, the medial temporal gradient, extending from the PPA, was shown here to be responsive mostly to environments up to the neighborhood level. The PPA is classically known to be involved in location recognition and analysis of observed scene layouts (Epstein, 2008). Our findings suggest a role for the PPA in performing similar computations not only in directly visible scenes, but also in larger environments that can still be learned by experience. Finally, the lateral occipito-parietal gradient, extending from the OPA, was shown here to be primarily involved in processing room- to building-sized environments, but also to have an anterior extension involved in processing very large environments.
The OPA is thought to be a perceptual processing system for analyzing local geometry and identifying routes within visual scenes (Bonner and Epstein, 2018; Bonner and Epstein, 2017), and our findings suggest it may serve similar functions in very large-scale spaces, possibly due to the human tendency to visually imagine these scales as maps. Taken together, the anterior extensions of the PPA, RSC and OPA suggest that these regions perform general computations across different environment sizes, beyond the immediately-visible scenes by which they are usually defined.

When looking at the overall distribution of spatial scale selective voxels across the brain, it is apparent that some spatial scales are more prominently represented than others. The smallest environments (rooms) were preferentially represented along large parts of the lateral parieto-temporal cortex, indicating the importance of the immediately surrounding environment in everyday experience and behavior. Among the medium scales, neighborhoods showed the greatest prominence, with the largest number of maximally active voxels along the three gradients. Regions the size of neighborhoods may be the most directly relevant for everyday active navigation and social communication; indeed, most monkeys and apes have a territory size of up to 3 km2 (Lowen and Dunbar, 1994), suggesting that this is the scale that has been most relevant to navigation in primate (and possibly human) evolution. Finally, several regions in the lateral occipital and medial prefrontal lobes, as well as in the anterior parts of the three cortical gradients, showed prominent activity specifically at the largest spatial scale of continents. This specificity may be related to the increased abstractness of relations as perceived in these large environments, or to the use of mechanisms specifically involved in imagining very large spaces, such as their conception through maps (Montello, 1993; Tversky, 2003).

Activity in the hippocampus, and in some of the anterior parts of the cortical gradients, was negative relative to baseline, while showing consistent differences in activity between scales. The anterior parts of the three cortical gradients overlap with the DMN, which is often characterized by negative BOLD during tasks (Raichle et al., 2001), and negative BOLD in the hippocampus is also a common finding (Shipman and Astur, 2008). These negative activations have previously been interpreted as potentially reflecting higher constitutive activity of these regions during rest than during active tasks (Ekstrom, 2010; Shipman and Astur, 2008; Stark and Squire, 2001). The fact that these activations are below baseline precludes inference of whether these regions participate in processing of smaller spatial scales or are only active for larger ones.

Our study has several limitations. First, the task we used involved a specific cognitive computation of three-way distance comparison between locations, enabling direct comparison between scales using the same task and experimental design. However, experiments involving other tasks that can be applied across spatial scales may reveal additional information on scale-specific and scale-independent brain processes. Second, to obtain a large range of spatial scales and maintain ecological validity we used a personalized paradigm where subjects provided names of real-world locations familiar to them, in six naturalistic scales, therefore not controlling for the precise size and distances in each scale. Despite this restriction, the distances between subjects’ selected stimuli increased logarithmically with each scale, and the bilateral posterior-anterior organization was consistently observed across all three gradients. However, the exact relations between distances and scales may be further investigated in a more granular manner using studies of well-controlled (e.g. virtual) environments with different scales. Third, to identify the DMN and the known scene-selective cortical regions, we used group averages from large subject groups; directly identifying these systems at the single-subject level might yield more detailed measurements of their scale specificity. Fourth, we did not measure navigation, imagery or memory abilities, and therefore did not control for these factors in the group analyses; however, our results hold at the individual subject level in the large majority of subjects, limiting the ability of these factors to explain our results. Finally, as discussed above, environments at different scales may have inherent differences in the way they are imagined and in the strategies employed for judgments within them, such as imagining walking, driving or flying through them, or imagining them through maps.
Although we cannot rule out these factors as affecting activation differences between scales, a shift between different strategies is not likely to explain a continuous shift in the location of activity along cortical gradients with a change in spatial scale, as we observed here.

In conclusion, our results demonstrate the extension of known visual scene-responsive regions to a larger scheme of brain organization and processing of relations in larger unseen environments. These findings may provide a basis for understanding how the human brain processes and integrates the navigated environment across scales. Furthermore, our findings suggest a way by which brain systems responsible for representation of large-scale environments may be used to flexibly represent information in other abstract cognitive domains. Further investigations into how the brain integrates environments and relations at large scales may inform us about general processing mechanisms in the brain, and about how relations in other abstract conceptual domains are encoded by spatially-based processes.

Materials and methods

Subjects

Nineteen healthy subjects (twelve males, mean age 27.7 ± 4.4 y) participated in the study. All subjects provided written informed consent, and the study was approved by the ethical committee of the Hadassah Hebrew University Medical Center.

Experimental stimuli

Request a detailed protocol

Six spatial scales were investigated: room, building, neighborhood, city, country and continent. These scales reflect ecological categories, which grow in size in a logarithmic manner (Figure 1—figure supplement 1). To gather stimuli for this large range of spatial scales, subjects were asked to provide names of two real-world locations personally familiar to them at each scale, several days before the experiment (e.g. home bedroom, Hadassah hospital, London, Argentina). In each of these twelve locations, subjects indicated the names of eight items whose location they personally know: objects at the room and building scale (e.g. bed, vending machine), and landmarks at the neighborhood, city, country and continent scales (e.g. supermarket, Eiffel Tower). Subjects were asked to keep the item names short and make sure they represent a unique location. Subjects who failed to provide enough personally-familiar stimuli (due to lack of sufficient travel experience abroad) were not included in the experiment.

Experimental paradigm

Request a detailed protocol

During the experiment, subjects were presented with a target stimulus consisting of one of the items they had provided and its respective location (e.g. ‘table’ in ‘living room’, ‘city hall’ in ‘Jerusalem’), followed by a pair of other stimuli from the same location on the left and right of the screen (Figure 1). Subjects were instructed to indicate which of the two stimuli is closer to the target stimulus by pressing the left or right buttons.

Stimuli were presented in a randomized block design. Each block started with presentation of a target stimulus for 2.5 s, followed by consecutive presentation of four stimulus pairs, each for 2.5 s (Figure 1). All stimuli within the same block had to be judged in relation to the block’s target stimulus location. Each block (12.5 s) was followed by 7.5 s of fixation. Subjects were instructed to respond accurately but as fast as possible. The experiment consisted of four or five runs per subject, each containing 24 blocks in randomized order (two blocks for each of the twelve locations, i.e. four blocks per spatial scale), for a total of 384–480 comparisons over the experiment (24 blocks × four pairs × four or five runs). Target items and stimulus pairs were chosen independently and randomly from the eight items the subject provided for each location, allowing for repetitions; on average, 3.5% of stimulus pairs were repeated during the experiment (with the same target stimulus), and each item was used 9 ± 2.87 times as a target. In addition, eleven subjects performed a lexical control task in a separate run, in which they viewed similar target stimuli followed by stimulus pairs but were instructed to indicate which of the pair of words was closer in length to the target stimulus. A training task using pairs of stimuli drawn from the same pool was delivered before the experiment; subjects performed the training until they indicated that they felt comfortable doing the task (average number of training trials per subject = 53 ± 26.6, or 8.8 trials per spatial scale). Stimuli were presented using the Presentation software (Version 18.3, Neurobehavioral Systems, Inc, Berkeley, CA, www.neurobs.com, RRID:SCR_002521).
After the experiment, subjects rated their level of familiarity with each of the twelve locations, the emotional significance of the location, and the level of difficulty of judgments at each location (from 1 to 7). They were also asked to describe the strategy used for determining responses in each of the six spatial scales (free descriptions) and specifically to what extent they adopted a ground-level or bird’s-eye point-of-view (1 to 7 rating).

Analysis of spatial scale sizes

Request a detailed protocol

For each stimulus provided by each participant, we identified the latitude and longitude of the stimulus location, when the name could be unambiguously identified. 72% of stimulus locations were identified (65% for neighborhoods, 83% for cities, 72% for countries and 70% for continents). The pairwise distances between all items in each location and scale were calculated using the Haversine formula (to account for the Earth’s spherical shape), using a script provided by M Sohrabinia: https://www.mathworks.com/matlabcentral/fileexchange/38812-latlon-distance. A linear fit to the log-transformed distances across scales yielded r2 = 0.98, indicating that scale transitions reflect a logarithmic increase in environmental size.
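The Haversine computation can be sketched in Python (the original analysis used the cited Matlab script; the function names here are illustrative):

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in km between two (lat, lon) points given in degrees."""
    R = 6371.0  # mean Earth radius, km
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2
    return 2 * R * math.asin(math.sqrt(a))

def pairwise_distances(coords):
    """All pairwise distances between items at one location, as in the scale-size analysis."""
    return [haversine_km(*coords[i], *coords[j])
            for i in range(len(coords)) for j in range(i + 1, len(coords))]
```

The per-scale distance distributions produced this way can then be log-transformed and fitted linearly, as described above.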

MRI acquisition

Request a detailed protocol

Subjects were scanned in a 3T Siemens Skyra MRI (Siemens, Erlangen, Germany) at the Edmund and Lily Safra Center (ELSC) neuroimaging unit. Blood oxygenation level-dependent (BOLD) contrast was obtained with an echo-planar imaging sequence [repetition time (TR), 2,500 ms; echo time (TE), 30 ms; flip angle, 75°; field of view, 192 mm; matrix size, 64 × 64; functional voxel size, 3 × 3 × 3 mm; 46 slices, descending acquisition order, no gap; 200 TRs per run]. In addition, T1-weighted high resolution (1 × 1 × 1 mm, 160 slices) anatomical images were acquired for each subject using the MPRAGE protocol [TR, 2,300 ms; TE, 2.98 ms; flip angle, 9°; field of view, 256 mm].

MRI processing

Request a detailed protocol

fMRI data were processed and analyzed using the BrainVoyager 20.6 software package (R. Goebel, Brain Innovation, Maastricht, The Netherlands, RRID:SCR_013057), Neuroelf v1.1 (www.neuroelf.net, RRID:SCR_014147), and in-house Matlab (Mathworks, version 2018a, RRID:SCR_001622) scripts. Preprocessing of functional scans included slice timing correction (cubic spline interpolation), 3D motion correction by realignment to the first run image (trilinear detection and sinc interpolation), high-pass filtering (up to two cycles), smoothing (full width at half maximum (FWHM) = 4 mm), exclusion of voxels below intensity values of 100, and co-registration to the anatomical T1 images. Anatomical brain images were corrected for signal inhomogeneity and skull-stripped. All images were subsequently normalized to Montreal Neurological Institute (MNI) space (3 × 3 × 3 mm functional resolution, trilinear interpolation). The full analysis and preprocessing scripts are available at https://github.com/CompuNeuroPsychiatryLabEinKerem/publications_data/tree/master/spatial_scales (Peer et al., 2019, copy archived at https://github.com/elifesciences-publications/publications_data).

Functional MRI analysis

Estimation of cortical responses to each spatial scale 

Request a detailed protocol

A general linear model (GLM) analysis (Friston et al., 1994) was applied at each voxel, where predictors corresponded to the six spatial scales. Each modeled predictor included all experimental blocks at one spatial scale, where each block was modeled as a boxcar function encompassing the target stimulus and the four distance comparisons following it. Predictors were convolved with a canonical hemodynamic response function, and the model was fitted to the BOLD time-course at each voxel. Motion parameters were added to the GLM to eliminate motion-related noise. In addition, white matter and CSF masks were manually extracted in BrainVoyager for each subject (intensity >150 for the white-matter mask and intensity <10 with a bounding box around the lateral ventricles for CSF), and the average signals from these masks were added to the GLM to eliminate potential noise sources. Data were corrected for serial correlations using the AR(2) model and transformed to units of percent signal change. Subsequently, a random-effects analysis was performed across all subjects to obtain group-level beta values for each predictor.
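Under the stated design (TR = 2.5 s, 12.5 s blocks, 200 volumes per run), the predictor construction might be sketched as follows; the original analysis was performed in BrainVoyager, so the double-gamma HRF form and all helper names here are illustrative assumptions:

```python
import numpy as np
from scipy.stats import gamma

TR = 2.5  # repetition time, s

def boxcar(onsets, duration, n_vols, tr=TR):
    """1 during each block (target + four comparisons = 12.5 s), 0 elsewhere."""
    t = np.arange(n_vols) * tr
    x = np.zeros(n_vols)
    for on in onsets:
        x[(t >= on) & (t < on + duration)] = 1.0
    return x

def canonical_hrf(tr=TR, length=32.0):
    """A common double-gamma HRF approximation (an assumption, not BrainVoyager's exact kernel)."""
    t = np.arange(0, length, tr)
    return gamma.pdf(t, 6) - gamma.pdf(t, 16) / 6

def design_matrix(scale_onsets, n_vols):
    """One HRF-convolved boxcar predictor per spatial scale, plus an intercept column."""
    h = canonical_hrf()
    cols = [np.convolve(boxcar(on, 12.5, n_vols), h)[:n_vols] for on in scale_onsets]
    return np.column_stack(cols + [np.ones(n_vols)])

def fit_glm(y, X):
    """Ordinary least-squares betas; the paper additionally adds motion, white-matter
    and CSF regressors and applies AR(2) serial-correlation correction."""
    return np.linalg.lstsq(X, y, rcond=None)[0]
```

A full model would include one predictor per scale (six columns) plus the nuisance regressors described above.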

Identification of voxels with spatial scale sensitive activity

Request a detailed protocol

To identify voxels with differences in brain activity between spatial scales, a single-factor repeated-measures ANOVA was applied in each voxel on the scale-specific predictors’ beta values, across all subjects (FDR-corrected for multiple comparisons across voxels, p<0.01). Following voxel identification, beta values were averaged for each voxel across subjects, and two methods were used to identify selectivity to spatial scales: (1) fitting a Gaussian function to each voxel’s beta values across the six scales and identifying its peak; (2) selecting the scale with maximal activity (Figure 2—figure supplement 1). Since the responses in almost all regions follow a gradual pattern of change between scales, the Gaussian fit enables fuller consideration of the overall pattern of activity and scale selectivity, instead of focusing only on the maximally active scale. Gaussian fitting was performed for each beta vector after normalization by subtracting its minimum value, using Matlab, with bounds of 0 to 100 for amplitude, −100 to 100 for center, and 0 to 100 for width. Only voxels with a fit of r2 > 0.7 (5737 of 7452 voxels) were included in the subsequent analyses of Gaussian fit peaks.
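The described fitting procedure (min-subtraction, bounded Gaussian fit, r² > 0.7 inclusion criterion) was implemented in Matlab; a Python/SciPy sketch with the same bounds might look like this (function names are illustrative):

```python
import numpy as np
from scipy.optimize import curve_fit

SCALES = np.arange(1, 7)  # 1 = room ... 6 = continent

def gauss(x, amp, center, width):
    return amp * np.exp(-((x - center) ** 2) / (2 * width ** 2))

def gaussian_peak(betas, r2_thresh=0.7):
    """Fit a Gaussian to one voxel's six scale betas (after subtracting the minimum,
    as described) and return the fitted peak, or None if r2 falls below threshold."""
    y = np.asarray(betas, dtype=float)
    y = y - y.min()
    try:
        p, _ = curve_fit(gauss, SCALES, y,
                         p0=[y.max(), SCALES[np.argmax(y)], 1.0],
                         bounds=([0, -100, 0], [100, 100, 100]))
    except Exception:
        return None
    resid = y - gauss(SCALES, *p)
    r2 = 1 - resid.var() / y.var() if y.var() > 0 else 0.0
    return p[1] if r2 >= r2_thresh else None
```

Voxels returning None (poor fit) would be excluded from the peak-based analyses, mirroring the 5737/7452 inclusion described above.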

Group-level analysis of activity profiles across spatial scales

Request a detailed protocol

Event-related activity (ERA) averaging and beta averaging across subjects were used to investigate activity profiles at each region. For event-related activity, BOLD signals were averaged for all blocks containing each scale across all runs and subjects, for the ten functional volumes following each block’s initial display of the target stimulus. Beta plots were also created by averaging the beta values calculated in the random-effects GLM analysis across all subjects. These procedures were performed in each region of interest, as defined by the peak of the Gaussian fit to the group-averaged beta maps.
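The event-related averaging over the ten volumes following each block onset can be sketched minimally (a Python illustration of the described procedure; names are assumptions):

```python
import numpy as np

def event_related_average(bold, block_onset_vols, n_after=10):
    """Average a BOLD time course over the n_after volumes following each block onset
    (one scale's blocks; in the paper this is pooled across runs and subjects)."""
    segments = [bold[v:v + n_after] for v in block_onset_vols
                if v + n_after <= len(bold)]
    return np.mean(segments, axis=0)
```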

Measuring increase of scale selectivity along gradients and along the hippocampal long axis

Request a detailed protocol

Within the hippocampus and the three identified gradients (medial temporal, medial parietal and lateral occipito-parietal), the peak of the Gaussian fit was averaged for each MNI coordinate along the Y axis, as was the scale with maximal response, resulting in vectors of scale selectivity along the posterior-anterior axis. To test whether preferred scale increases gradually along each gradient, we fitted a linear function to the scale preference values along it, and also fitted this function to 1000 shuffled versions of each scale preference vector to obtain a null distribution. The slope of the actual fit was tested against the slopes of the fits to the random permutations to check whether the observed increase in scale preference along the gradients deviates significantly from chance. Resulting p-values were corrected for multiple comparisons across gradients using the false discovery rate (Benjamini and Hochberg, 1995). This analysis was additionally repeated at the individual subject level, by fitting the Gaussian function for each subject and calculating the linear fit and its significance along the cortical gradients (regions defined by the group results) and along the hippocampus.
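The permutation test of the gradient slope can be sketched as follows (a Python illustration of the described shuffling procedure; the random seed and function names are assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

def gradient_slope_pval(y_coords, preferred_scale, n_perm=1000):
    """Linear slope of preferred scale along the posterior-anterior (Y) axis,
    with a permutation p-value from shuffling the scale-preference vector."""
    slope = np.polyfit(y_coords, preferred_scale, 1)[0]
    null = np.array([np.polyfit(y_coords, rng.permutation(preferred_scale), 1)[0]
                     for _ in range(n_perm)])
    # One-sided p-value with the +1 correction for the observed statistic
    p = (np.sum(null >= slope) + 1) / (n_perm + 1)
    return slope, p
```

The resulting per-gradient p-values would then be FDR-corrected across the three gradients, as described.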

Comparison to hippocampus and visual scene-responsive regions (RSC, PPA and OPA)

Request a detailed protocol

Masks of the RSC, PPA and OPA were used, as established in a previous publication (Julian et al., 2012; http://web.mit.edu/bcs/nklab/GSS.shtml). These masks represent group activation clusters from 30 subjects who viewed visual images under a scenes > objects contrast. The outlines of the group-level clusters were overlaid on each cortical gradient (Figure 4) to compare their cortical locations. In addition, a region-of-interest GLM analysis (random-effects group analysis) was performed within each mask, to obtain beta values for each spatial scale in each region. The hippocampal region of interest was extracted from the Harvard-Oxford atlas brain template distributed with FSL (http://www.fmrib.ox.ac.uk/fsl/, RRID:SCR_001476; Desikan et al., 2006; Jenkinson et al., 2012).

Comparison of scale-specific activations to large-scale resting-state networks

Request a detailed protocol

A previously published whole-brain parcellation into seven large-scale brain networks was used as a template for resting-state network locations. For each scale-selective region within the three gradients, the percent of overlap with each of the seven resting-state networks was measured (percent of voxels from this region falling within each network).
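Given a voxelwise network-label map and a boolean region mask, this overlap measure reduces to a simple tabulation (a Python sketch; names and the label-0-outside-networks convention are assumptions):

```python
import numpy as np

def percent_overlap(region_mask, network_labels, n_networks=7):
    """Percent of a scale-selective region's voxels falling in each of the seven
    resting-state networks; label 0 marks voxels outside all networks."""
    labels_in_region = network_labels[region_mask]
    return np.array([100.0 * np.mean(labels_in_region == k)
                     for k in range(1, n_networks + 1)])
```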

Analyses of potential factors contributing to the scale effect

Request a detailed protocol

Each subject’s ratings of difficulty, emotional significance and familiarity for each location were independently normalized by z-transform. Ratings of first-person perspective taking, third-person perspective taking, and mentions of use of different strategies were similarly transformed for each scale. The resulting values were then used as parametric modulation regressors (after convolution with the hemodynamic response function), assigned according to each experimental block’s spatial scale and specific location. A response time predictor was added in a similar manner according to each trial’s response time. Random-effects group analysis (corrected for serial correlations, AR(2)) was then performed using each of the regressors separately, to identify activity modulation by each potential contributing factor. In addition, one-way ANOVA (Tukey-Kramer post-hoc test, p<0.01) was used to identify significant differences in the ratings between the six spatial scales.
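The construction of one such parametric modulation regressor might be sketched as follows (a Python illustration; the HRF is passed in as an argument, and all names are assumptions):

```python
import numpy as np

def parametric_regressor(block_onsets, block_values, n_vols, hrf,
                         tr=2.5, duration=12.5):
    """Boxcar per block, scaled by that block's z-scored rating, then HRF-convolved."""
    t = np.arange(n_vols) * tr
    v = np.asarray(block_values, dtype=float)
    z = (v - v.mean()) / v.std()  # independent z-transform of the ratings
    x = np.zeros(n_vols)
    for on, val in zip(block_onsets, z):
        x[(t >= on) & (t < on + duration)] = val
    return np.convolve(x, hrf)[:n_vols]
```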

Comparison of activity to the lexical control task

Request a detailed protocol

Regressors for the lexical control were added to the scale predictors in the GLM analysis, and a new design matrix was computed for each subject. A group analysis (corrected for serial correlations, AR(2)) was performed in each scale-sensitive region of interest, and activity in this region’s preferred scale was contrasted with the activity corresponding to the respective control condition.

Data sharing

Request a detailed protocol

All of the analysis codes from this project, as well as the resulting statistical maps and spatial scale-specific regions, are freely available at https://github.com/CompuNeuroPsychiatryLabEinKerem/publications_data/tree/master/spatial_scales (Peer et al., 2019, copy archived at https://github.com/elifesciences-publications/publications_data).

References

Epstein RA, Vass LK (2014) Neural systems for landmark-based wayfinding in humans. Philosophical Transactions of the Royal Society B: Biological Sciences 369:20120533. https://doi.org/10.1098/rstb.2012.0533
Meilinger T (2008) The network of reference frames theory: a synthesis of graphs and cognitive maps. In: Spatial Cognition VI. Learning, Reasoning, and Talking About Space. Springer, pp. 344–360. https://doi.org/10.1007/978-3-540-87601-4_25
Montello DR (1993) Scale and multiple psychologies of space. In: European Conference on Spatial Information Theory. Springer, pp. 312–321. https://doi.org/10.1007/3-540-57207-4_21
Werner S, Krieg-Brückner B, Herrmann T (2000) Modelling navigational knowledge by route graphs. In: Spatial Cognition II. Springer, pp. 295–316. https://doi.org/10.1007/3-540-45460-8_22

Decision letter

  1. Timothy E Behrens
    Senior Editor; University of Oxford, United Kingdom
  2. Muireann Irish
    Reviewing Editor; University of Sydney, Australia
  3. Dori Derdikman
    Reviewer; Technion - Israel Institute of Technology, Israel
  4. Buddhika Bellana
    Reviewer; Johns Hopkins University, United States

In the interests of transparency, eLife includes the editorial decision letter and accompanying author responses. A lightly edited version of the letter sent to the authors after peer review is shown, indicating the most substantive concerns; minor comments are not usually included.

Thank you for submitting your article "Processing of different spatial scales in the human brain" for consideration by eLife. Your article has been reviewed by three peer reviewers, and the evaluation has been overseen by a Reviewing Editor and Timothy Behrens as the Senior Editor. The following individuals involved in review of your submission have agreed to reveal their identity: Dori Derdikman (Reviewer #2); Buddhika Bellana (Reviewer #3).

The reviewers have discussed the reviews with one another and the Reviewing Editor has drafted this decision to help you prepare a revised submission.

Summary:

This study explores the representation of scale-dependent spatial information ranging from small to large geographical spaces and from concrete to abstract features. Using a novel fMRI task, human participants were asked to perform judgments of spatial distance (i.e., which of the two following items are closer to a cued item), where the spatial scale of the distance judgment is manipulated within participants. Scale-dependent activity was evident in three posterior-anterior gradients (medial-temporal cortex, medial parietal cortex and lateral parieto-occipital cortex). Within each of these regions, a significant linear fit was observed, such that more anterior voxels were preferentially recruited during judgments of larger spatial scales (e.g., city, country, continent), while posterior voxels were associated with judgments at more local scales (e.g., room, building). The findings reported here complement and extend previous work in both rodents and humans, which have mostly focused on smaller-scale environments and scenes.

Essential revisions:

Task Design:

1) The task description requires further detail to understand exactly what was required of participants and, in turn, how to interpret the results. It would be useful to share the specific decisions each participant had to make for each level of spatial scale, or at the very least, examples from each level of spatial scale. An example at the room level is given, but similar in-text samples of at least one block per spatial scale is necessary for the reader to have a better handle on the actual decisions the participants made. Consider adding a short description of the task before the results, and ideally a Methods figure in the main text accompanied by a set of example cues and trials from each of the 6 spatial scales.

2) While the increasing levels of spatial scale make intuitive sense, the cut-off between each category seems rather arbitrarily defined. For example, it was not clear why the spatial scales progressed from house to neighbourhood when an interim level of street could be made. Similarly, the leap from city to country further seemed to neglect intermediate spatial scales such as region/county etc. It would be helpful for the authors to acknowledge or justify the use of these seemingly arbitrary cut-off points between these spatial scales and to perhaps consider how a more granular classification system might result in different findings.

3) The specific composition of object pairs in the distance judgments task is not clearly presented. From my understanding, participants provided 2 locations per level of spatial scale (total = 12 locations), and then produced 8 relevant items each. Then the task itself had 4-5 runs that contained 4 blocks per spatial scale (total = 16 blocks). Each block then contained 4 stimulus pairs and a target/cue, anchoring the participants' decisions. Therefore, there should be at least 4*4*4 (# of runs * # of blocks * # of pairs) pairs of items for each spatial scale across the experiment.

4) Were all pairs unique for each spatial scale and subject, or were there repetitions?

5) Were all items drawn exclusively from combinations of the 12 items produced by the corresponding subject?

6) How was the cued item (e.g., "the bed") selected per block?

7) Were all 12 items produced by each subject used as targets? In pairs? Were they all presented an equal number of times?

8) Were these 12 same items used in the 2-5 minutes training task?

9) On how many runs of each spatial scale were the participants trained? Did each training run have the same parameters as the experimental run (e.g., number of trials/object pairs)?

10) Were the post-task difficulty judgments done for each specific object pairing, or was a rating produced per individual object? E.g.1-7 difficulty rating for "Table - Window"? 1-7 difficult rating for "Table"?

Neuroimaging data:

11) What precisely was the modelled predictor? The specific onset of each object pair presentation from each block, per spatial scale, drawn across runs (e.g., stick function)? Or was it a boxcar function that modelled activity for the duration of each block? Please clarify.

12) Was there any additional nuisance regression that was conducted (e.g., white matter, or cerebrospinal fluid)? If not, why?

13) In terms of analysis, were trends other than linear ever tested? Did the linear fit have the most explanatory power relative to other potential trends (e.g., quadratic, cubic …)?

14) The behavioral data (Figure 1—figure supplement 2 and Supplementary file 3) should be used to segment the fMRI data, as at least in some cases it could provide an additional explanation to some of the gradients. Specifically:

a) When grouping data according to the categories in Supplementary file 3, what does the fMRI signal look like? (e.g. imagining a map-like view vs. triangulation).

b) How does the fMRI signal segment when weighing according to Familiarity rating, or Perspective-taking rating? (Figure 1—figure supplement 2)

Interpretation:

15) A fundamental question relates to the nature of smaller versus larger scale environments. For example, a room is far less dynamic than a city and does not necessarily require any movement or navigation within the spatial array to correctly adjudicate between the two distances. How can we determine whether scale preference is the critical factor versus the type of experience that one has at these different spatial scales? Participants were not asked whether the judgments within smaller scale environments invoked retrieval of specific prior experiences (i.e., episodic memory). Medial temporal and medial parietal activation for small-to-large environments may not necessarily reflect the spatial scale but the harnessing of personally relevant episodic experiences during the task. Please discuss.

16) The authors note the recruitment of the DMN for larger spatial scales and suggest the potential relationship with representations of abstract domains more generally. It may be interesting to examine this directly using a tool like Neurosynth (http://neurosynth.org). For example, one could examine the% voxel overlap between whole brain maps at each spatial scale and meta-analytic maps of "abstract" and "concrete" obtained via Neurosynth.

17) In Figure 1 (E, bottom right panel), the hippocampal long axis does not show a clear scale-dependent activity gradient, contrary to the claim in the Discussion section, and to previous works (subsection “Three posterior-anterior gradients of spatial scale selectivity”). Activity spans three scales (neighborhood->country) but seems to be involved mainly with 'city' scale. Furthermore, surprisingly it is not selective at all in 'room'/'building' scale. The above have been well established in both VR (humans/rodents) and freely moving rodents, which raises a question of how well the paradigm mimics actual coding of space. This discrepancy should be discussed.

https://doi.org/10.7554/eLife.47492.022

Author response

Task Design:

1) The task description requires further detail to understand exactly what was required of participants and, in turn, how to interpret the results. It would be useful to share the specific decisions each participant had to make for each level of spatial scale, or at the very least, examples from each level of spatial scale. An example at the room level is given, but similar in-text samples of at least one block per spatial scale is necessary for the reader to have a better handle on the actual decisions the participants made. Consider adding a short description of the task before the results, and ideally a Methods figure in the main text accompanied by a set of example cues and trials from each of the 6 spatial scales.

As suggested, we have added this to the main text as the new Figure 1, accompanied by example stimuli from one location in each spatial scale. We describe the task and stimuli in this figure’s legend and more extensively in the Materials and methods section, and refer to the figure in the Introduction.

2) While the increasing levels of spatial scale make intuitive sense, the cut-off between each category seems rather arbitrarily defined. For example, it was not clear why the spatial scales progressed from house to neighbourhood when an interim level of street could be made. Similarly, the leap from city to country further seemed to neglect intermediate spatial scales such as region/county etc. It would be helpful for the authors to acknowledge or justify the use of these seemingly arbitrary cut-off points between these spatial scales and to perhaps consider how a more granular classification system might result in different findings.

The main theme of the manuscript is, as stated by the reviewers, the continuous shift in processing along cortical gradients with increasing spatial scale, regardless of granularity. Our original design reasoning was that, for balance, we wanted two levels at the small scale (room, building), two at the middle scale (neighborhood, city), and two at the large spatial scale (country, continent). We further attempted to use ecologically valid scales that subjects naturally refer to, and therefore did not include counties (as our subjects live in Israel, which is not divided into counties due to its relatively small size).

To quantitatively investigate the cut-offs between scales and justify them, we have now explicitly attempted to measure the size of each scale across participants. We did this by identifying the latitude and longitude coordinates for each of the provided stimuli, for scales where these referred to identifiable locations (the neighborhood, city, country and continent scales). We managed to identify 72% of the 1824 locations provided by the subjects and measured the distances between all of the locations within each scale. On average, the distances were 350m between locations in neighborhoods, 2.8km between locations in cities, 233km between locations in countries and 1,140km between locations in continents. Assuming that the average distance between elements in a room is ~1m and between elements in a building ~10m, the scales in our design present a relatively stable logarithmic increase (r2 of a linear fit to the logarithmic values = 0.98).
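For reference, the stability of this logarithmic increase can be checked directly from the distances quoted above (a minimal sketch: the room and building values are the assumed ~1 m and ~10 m estimates, and the fit is to log10 distances):

```python
import numpy as np

# Average inter-item distance per scale, in meters (values from the text;
# room ~1 m and building ~10 m are assumed estimates, not measured ones)
scales = ["room", "building", "neighborhood", "city", "country", "continent"]
distances_m = np.array([1.0, 10.0, 350.0, 2800.0, 233000.0, 1140000.0])

# Linear fit to log10(distance) against scale index, and its r^2
x = np.arange(len(distances_m))
log_d = np.log10(distances_m)
slope, intercept = np.polyfit(x, log_d, 1)
residuals = log_d - (slope * x + intercept)
r_squared = 1.0 - residuals.var() / log_d.var()
print(f"slope = {slope:.2f} log-units per scale, r^2 = {r_squared:.2f}")
```

With these rounded values the fit explains nearly all the variance; the r² of 0.98 reported in the text was computed from the measured per-subject distances, so small differences from this sketch are expected.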

We have now added this analysis to the Materials and methods section, and added the figure as Figure 1—figure supplement 1. We further write in subsection “Experimental stimuli”: “These scales reflect ecological categories, which grow in size in a logarithmic manner (Figure 1—figure supplement 1)”. We also write in the manuscript Discussion section: “to obtain a large range of spatial scales and maintain ecological validity we used a personalized paradigm where subjects provided names of real-world locations familiar to them, in six naturalistic scales, therefore not controlling for the precise size and distances in each scale. Despite this restriction, the distances between subjects’ selected stimuli logarithmically increased with each scale, and a bilateral gradient organization was consistently observed across gradients. However, the exact relations between distances and scales may be further investigated in a more granular manner using studies of well-controlled (e.g. virtual) environments with different scales”.

3) The specific composition of object pairs in the distance judgments task is not clearly presented. From my understanding, participants provided 2 locations per level of spatial scale (total = 12 locations), and then produced 8 relevant items each. Then the task itself had 4-5 runs that contained 4 blocks per spatial scale (total = 16 blocks). Each block then contained 4 stimulus pairs and a target/cue, anchoring the participants' decisions. Therefore, there should be at least 4*4*4 (# of runs * # of blocks * # of pairs) pairs of items for each spatial scale across the experiment.

We thank the reviewers for this clarification. There were indeed two locations for each of the six spatial scales (total = 12 locations), and eight items in each location. In each experimental run, there were two blocks for each of the twelve locations (total = 24 blocks per run), and each block included four different stimulus pairs. Therefore, there were 4-5 runs * 24 blocks * 4 pairs = 384-480 pairs of comparisons for each participant. These details are now clarified in the Materials and methods section of the revised manuscript: “subjects were asked to provide names of two real-world locations personally familiar to them at each scale […] In total, subjects performed 24 blocks per run, each including four object pairs, for a total of 384-480 comparisons over the experiment.”

4) Were all pairs unique for each spatial scale and subject, or were there repetitions?

Pairs of items were chosen at random from each location, and repetitions across the experiment were allowed. We now calculated the number of repeated pairs across the experiment: on average, across subjects, only 3.5% of stimuli pairs were repeated across the experiment with the same anchor stimulus. This is now explicitly mentioned in the revised manuscript’s Materials and methods section: “Anchor items and stimuli pairs were chosen independently and randomly from the eight items the subject provided for each location, allowing for repetitions; on average, 3.5% of stimuli pairs were repeated during the experiment (with the same anchor stimulus).”

5) Were all items drawn exclusively from combinations of the 12 items produced by the corresponding subject?

All stimuli were drawn exclusively from combinations of the 8 items produced by the corresponding subject per spatial location (16 items per scale). This is now explicitly mentioned in the revised manuscript’s Materials and methods section: “Anchor items and stimuli pairs were chosen independently and randomly from the eight items the subject provided for each location”.

6) How was the cued item (e.g., "the bed") selected per block?

The cued item was selected randomly for each block, from the eight items provided by the subject for each location. This is now detailed in the revised manuscript’s Materials and methods section.

7) Were all 12 items produced by each subject used as targets? In pairs? Were they all presented an equal number of times?

As items were chosen at random, they did not have to be presented an equal number of times. Following the reviewer’s comment we have calculated and found that items were used as targets on average 9 ± 2.87 times for each subject. We now detail in the Materials and methods section: “on average, 3.5% of stimuli pairs were repeated during the experiment (with the same anchor stimulus), and each item was used 9 ± 2.87 times as a target.”

8) Were these 12 same items used in the 2-5 minutes training task?

The same items were used in the short training phase, as we now detail in the Materials and methods section. On average, 4.1 of the questions used in the training task were repeated in the fMRI experiment (~1% of the experiment questions), and we therefore do not expect these to interfere with the test phase results.

9) On how many runs of each spatial scale were the participants trained? Did each training run have the same parameters as the experimental run (e.g., number of trials/object pairs)?

Subjects performed the training until they indicated that they felt comfortable doing the task. On average, they performed 53 ± 26.6 comparisons, or 8.8 comparisons per scale. We now detail this in the methods: “A training task using pairs of stimuli derived from the same pool was delivered before the experiment; subjects performed the training until they indicated that they felt comfortable doing the task (average number of training trials per subject = 53 ± 26.6, or 8.8 trials per spatial scale). “(Subsection “Experimental paradigm”).

10) Were the post-task difficulty judgments done for each specific object pairing, or was a rating produced per individual object? E.g. 1-7 difficulty rating for "Table - Window"? 1-7 difficult rating for "Table"?

The post-task ratings of familiarity, emotion and difficulty were done for each of the 12 provided locations, and the perspective taking ratings and strategy descriptions were done for each of the six scales. The behavioral ratings details are now described in the Materials and methods section: “After the experiment, subjects rated their level of familiarity with each of the twelve locations, the emotional significance of the location, and level of difficulty of judgments at each location (from 1 to 7). They were also asked to describe the strategy used for determining responses in each of the six spatial scales (free descriptions) and specifically to what extent did they adopt a ground-level or bird’s-eye point-of-view (1 to 7 rating)”.

Neuroimaging data:

11) What precisely was the modelled predictor? The specific onset of each object pair presentation from each block, per spatial scale, drawn across runs (e.g., stick function)? Or was it a boxcar function that modelled activity for the duration of each block? Please clarify.

We modelled the whole duration of each block as a continuous boxcar function. We now clarify in the revised manuscript that “Each modeled predictor included all experimental blocks at one spatial scale, where each block was modeled as a boxcar function encompassing the target stimulus and the four distance comparisons following it. Predictors were convolved with a canonical hemodynamic response function, and the model was fitted to the BOLD time-course at each voxel.” (subsection “Functional MRI analysis”).
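As an illustration of this modeling step, a boxcar predictor convolved with a canonical (SPM-style double-gamma) HRF can be sketched as follows; the onsets, block duration, and TR here are arbitrary example values, not the study's actual timing:

```python
import numpy as np
from math import gamma as gamma_fn

def canonical_hrf(tr, duration=32.0):
    """Double-gamma canonical HRF sampled at the TR (SPM-style parameters)."""
    t = np.arange(0.0, duration, tr)
    peak = t ** 5 * np.exp(-t) / gamma_fn(6)        # positive response, peak ~5 s
    undershoot = t ** 15 * np.exp(-t) / gamma_fn(16)  # delayed undershoot, peak ~15 s
    return peak - undershoot / 6.0

def boxcar_predictor(onsets, block_dur, n_scans, tr):
    """Boxcar spanning each whole block, convolved with the canonical HRF."""
    box = np.zeros(n_scans)
    for onset in onsets:
        start = int(round(onset / tr))
        stop = int(round((onset + block_dur) / tr))
        box[start:stop] = 1.0
    # Causal convolution, truncated to the scan count
    return np.convolve(box, canonical_hrf(tr))[:n_scans]

# Example: two blocks of one spatial scale at 20 s and 120 s,
# each 15 s long, with a TR of 2.5 s over 100 scans
pred = boxcar_predictor(onsets=[20, 120], block_dur=15, n_scans=100, tr=2.5)
```

One such predictor would be built per spatial scale and entered as a column of the GLM design matrix, with the model fitted to each voxel's BOLD time course.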

12) Was there any additional nuisance regression that was conducted (e.g., white matter, or cerebrospinal fluid)? If not, why?

According to the reviewers’ suggestion, we have now re-done all of the analyses with addition of average white-matter and CSF signals as nuisance regressors. All of the results and statistically significant effects remain unchanged after adding these regressors. We updated all the figures accordingly and detail in the revised subsection “Functional MRI analysis” that: “white matter and CSF masks were manually extracted in BrainVoyager for each subject (intensity>150 for the white-matter mask and intensity<10 with a bounding box around the lateral ventricles for CSF), and the average signals from these masks were added to the GLM to eliminate potential noise sources.”

13) In terms of analysis, were trends other than linear ever tested? Did the linear fit have the most explanatory power relative to other potential trends (e.g., quadratic, cubic…)?

To detect gradual changes between scales we used a linear fit, applied to the scale preference graphs along each of the gradients (Figure 2E). Importantly, this fitting was not intended to model the precise shape of the scale change across the gradients, but rather to quantitatively measure whether scale preference consistently increases or decreases along the posterior-anterior axis of each gradient. To this aim, we estimated the slope of the increase in preferred scale (using an approximate linear fit), and tested whether it is larger than what would be obtained by chance if there were no actual increase in scale preference (using permutations of the scale preference values). Across the four regions (medial parietal, medial temporal, lateral parietal and hippocampus) and two scale preference measures (maximally active scale and peak of Gaussian fit to the beta values), there was a significant slope deviation from random value permutations (p<0.01 for all regions, FDR-corrected), indicating an increase in spatial scale preference along each gradient. In this regard, a quadratic or cubic fit cannot provide a measure of overall increase in preferred scale along each gradient, as these do not model a gradual directional change. We now clarify this in the subsection “Measuring increase of scale selectivity along gradients and along the hippocampal long axis”: “To measure whether there is a gradual increase in preferred scale along each gradient, we modeled each gradient using a linear function that was fitted to the scale preference values along it, and also fitted this function to 1000 shuffled versions of each scale preference vector for obtaining a null distribution. The slope of the actual fit was tested against the slope of the fits to the random permutations to check if the obtained increase in scale preference along the gradients significantly deviates from chance. Resulting p-values were corrected for multiple comparisons across gradients using the false discovery rate (Benjamini and Hochberg, 1995).”
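The permutation procedure described here can be sketched in a few lines; the scale-preference values below are toy data standing in for the measured gradient values, not the study's actual data:

```python
import numpy as np

rng = np.random.default_rng(0)

def slope_permutation_test(scale_pref, n_perm=1000):
    """Fit a line to scale-preference values ordered posterior->anterior,
    and compare its slope against slopes of shuffled versions (one-sided)."""
    x = np.arange(len(scale_pref))
    observed = np.polyfit(x, scale_pref, 1)[0]
    null = np.array([np.polyfit(x, rng.permutation(scale_pref), 1)[0]
                     for _ in range(n_perm)])
    # Add-one correction so p is never exactly zero
    p = (np.sum(null >= observed) + 1) / (n_perm + 1)
    return observed, p

# Toy example: preferred scale (1 = room ... 6 = continent) at successive
# positions along a posterior-anterior gradient
prefs = np.array([1, 1, 2, 2, 3, 4, 4, 5, 6, 6])
slope, p = slope_permutation_test(prefs)
```

A significantly positive slope relative to the shuffled null indicates a consistent posterior-to-anterior increase in preferred scale, without assuming any particular shape for the change.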

14) The behavioral data (Figure 1—figure supplement 2 and Supplementary file 3) should be used to segment the fMRI data, as at least in some cases it could provide an additional explanation to some of the gradients. Specifically:

a) When grouping data according to the categories in Supplementary file 3, what does the fMRI signal look like? (e.g. imagining a map-like view vs. triangulation).

b) How does the fMRI signal segment when weighing according to Familiarity rating, or Perspective-taking rating? (Figure 1—figure supplement 2).

The reviewers mention that perspective-taking, strategy, emotion, difficulty and familiarity are important factors to consider, and may partially explain why making judgment at different scales activates different regions along the gradients we identified. To investigate these issues more in depth, we have now created parametrically modulated regressors according to each of these factors, with the values derived from our subjects’ ratings: degree of difficulty, emotional valence, location familiarity, first-person and third-person perspective imagination, and whether subjects reported or not using specific strategies in the different scales (imagining lines toward the objects, calculating walking/driving/flying times to each item, imagining themselves “looking around” within the scene, imagining a mental map). Note that for the verbal strategy descriptions we do not have quantifiable data for each subject and scale, as subjects were asked an open question on strategy use and we counted whether they mentioned using this strategy or not. We ran a random-effects GLM analysis for each of these factors without taking into account the spatial scale, to investigate their contribution to the observed effects. We now show the results of each of these GLMs in Figure 2—figure supplement 5.

The results of this analysis show that four factors explain significant variance in parts of the gradients: level of familiarity with each location, use of a first-person perspective, use of a third-person perspective, and use of a map-imagination strategy. This makes sense, as these factors are correlated with the change in spatial scale (average correlation across subjects between linear scale increase and these four factors: r = -0.69, -0.81, 0.75, 0.77). For this reason, regions that show a linear increase or decrease in activity with increasing scale will also show the same effect relative to these correlated factors.
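A parametrically modulated regressor of the kind used in this analysis can be sketched as follows (a minimal illustration with toy boxcars and ratings; an actual analysis would also convolve with an HRF and enter the regressor alongside the main-effect predictor):

```python
import numpy as np

def parametric_regressor(block_values, box_predictors):
    """Weight each block's boxcar by a mean-centered behavioral rating
    (e.g. familiarity), yielding a parametrically modulated regressor."""
    values = np.asarray(block_values, dtype=float)
    # Center the ratings so the modulation is separated from the main effect
    values = values - values.mean()
    return sum(v * box for v, box in zip(values, box_predictors))

# Toy example: three blocks as boxcars over 12 scans, modulated by
# hypothetical familiarity ratings of 7, 4 and 1
boxes = [np.zeros(12) for _ in range(3)]
boxes[0][1:3] = boxes[1][5:7] = boxes[2][9:11] = 1.0
reg = parametric_regressor([7, 4, 1], boxes)
```

Voxels whose activity tracks the rating then load positively (or negatively) on this regressor, regardless of the nominal spatial scale of each block.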

The new analyses are now described in the revised manuscript’s Materials and methods section. For clarity, we combined the subsection “subjects’ ratings and reports” and subsection “ruling out effects of possible confounds” into a new subsection “Subjects’ behavioral ratings and their relation to the scale effects”, in which we detail the specific correlation values between scales and behavioral measures. We further explain “To measure the effect of these different factors on the observed activations, we used parametric modulation using subjects’ ratings of emotion, familiarity, difficulty, perspective taking and strategy. The familiarity, perspective taking (first-person and third-person) and reports of use of a map strategy showed significant effects inside the scale-related gradients, in accordance with their high correlation to spatial scale (Figure 2—figure supplement 5). No other factor showed any significantly active regions in this analysis”. We also detail in the Discussion section that “These scale-selective gradients were correlated with a shift from detailed to less-detailed knowledge of locations, and from first- to third-person perspective taking with increasing scale”.

We discuss extensively the contribution of these factors in the discussion, as detailed in the answer to the next comment.

Interpretation:

15) A fundamental question relates to the nature of smaller versus larger scale environments. For example, a room is far less dynamic than a city and does not necessarily require any movement or navigation within the spatial array to correctly adjudicate between the two distances. How can we determine whether scale preference is the critical factor versus the type of experience that one has at these different spatial scales? Participants were not asked whether the judgments within smaller scale environments invoked retrieval of specific prior experiences (i.e., episodic memory). Medial temporal and medial parietal activation for small-to-large environments may not necessarily reflect the spatial scale but the harnessing of personally relevant episodic experiences during the task. Please discuss.

There are indeed several factors that vary across scales in real life and may account, to different degrees, for the scale-related activity differences reported in our study. Further studies of several of these factors are now being carried out in our lab in a fully controlled manner, investigating in detail the contribution of each factor. The current study did not aim to fully disentangle these factors, a task that requires several more dedicated studies, but rather to demonstrate the difference in cortical processing associated with spatial judgments at different scales. As suggested by the reviewers, we discuss these factors in the revised manuscript. These factors include:

Dynamics / movement in each scale – as mentioned, the smallest scale (room) might not require movement to judge spatial relations in it, in contrast to larger scales. We did not find any consistent differences in verbal reports of imagined movement between the scales, but this factor might still play a part in the observed differences.

Different degree of use of personally-relevant episodic memories – while these may indeed differ between scales, the specific study design requiring quick judgements in a very short time (2.5s per stimuli pair) did not leave room for such detailed elaborations. In addition, episodic-autobiographic memories engage prominently the DMN, which we found to be the most active for the largest spatial scales that subjects described as involving more third-person, map-like imagery and not first-person imagination. However, we cannot rule out the effect of this element.

Level of familiarity and first vs. third person perspective taking were shown to correlate with scale (and thus with our results; see comment 14). Previous studies directly manipulating environmental knowledge / familiarity (Epstein et al., 2007a, 2007b, Hirshorn et al., 2012b, Wolbers and Buchel 2005) and perspective taking (Rosenbaum et al., 2004, Sherrill et al., 2013) found that the posterior parts of our gradients (PPA, RSC and OPA) are more active for more well-known locations and first-person perspective, as we find here. However, these studies did not describe the opposite pattern in regions anterior to them for less well-known locations and third-person perspective taking, suggesting that these effects cannot fully account for the gradients we observe. One way to reconcile these issues is to suggest that posterior parts of the gradients contain representations that are highly detailed and related to actual perception of the spatial environment, in accordance with the relation of these regions to the visual system and their experience through a first-person perspective. As the scale increases, activity shifts to anterior DMN regions containing less-detailed representations, which are more schematic / abstract and therefore support a third-person, or map-like, imagination of the environment.

We have now added a paragraph to the Discussion section to discuss these issues: “Several factors might explain the shift in cortical activity when subjects make judgments at different scales. One element that may differ between scales is the amount of movement involved in their navigation and initial learning, although we did not find consistent differences between reports of imagined movement at different scales. […] These findings might be explained by the idea that posterior gradient regions contain detailed spatial information, supported by the visual system and acquired using a first-person perspective; as the scale increases, knowledge becomes less detailed and more abstract and schematic, supporting a bird’s-eye / map-like imagination (Arzy and Schacter, 2019).” (Discussion section).

16) The authors note the recruitment of the DMN for larger spatial scales and suggest the potential relationship with representations of abstract domains more generally. It may be interesting to examine this directly using a tool like Neurosynth (http://neurosynth.org). For example, one could examine the % voxel overlap between whole brain maps at each spatial scale and meta-analytic maps of "abstract" and "concrete" obtained via Neurosynth.

We thank the reviewers for this suggestion, which could potentially contribute significantly to the current paper and further test our hypotheses. We therefore enthusiastically aimed to perform the suggested analysis. However, regardless of the different scales, the Neurosynth maps for “abstract” and “concrete” were almost completely identical (see Author response image 1), making this database problematic to use.

This similarity seems to be related to the Neurosynth algorithm, which takes all of the activations reported in a study and associates them with the study’s keywords. We have looked at the full-text versions of the most highly weighted individual manuscripts included in Neurosynth for the terms “concrete” and “abstract”; many of them compare concrete and abstract processing, and therefore include both keywords, so their activations are associated with both terms. For this reason, calculation of the overlap between scales and Neurosynth maps does not yield informative or clear results (see Author response table 1 below). We believe that due to this methodological issue the analysis may be misleading, and therefore prefer not to include it in the revised paper.

Author response table 1
Neurosynth results – overlap of the “concrete” and “abstract” maps with each scale-specific gradient part.
https://doi.org/10.7554/eLife.47492.019
                            Room    Building  Neighborhood  City    Country  Continent
Parahippocampal gradient
  Concrete                  5.9%    20%       9.9%          5.2%    0%       0%
  Abstract                  10%     16.8%     1.8%          0%      1.8%     0%
Retrosplenial gradient
  Concrete                  0%      0%        18%           31.6%   27.5%    15.3%
  Abstract                  0%      0%        2%            4.4%    19%      16.1%
Occipito-parietal gradient
  Concrete                  4.8%    16.9%     0%            0%      27.9%    23.6%
  Abstract                  16.2%   13.7%     0%            0%      23.3%    39.7%
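For reference, a percent-voxel-overlap computation of the kind tabulated above can be sketched as follows (a minimal illustration; the toy 1-D arrays stand in for binarized 3-D statistical maps):

```python
import numpy as np

def percent_overlap(scale_map, meta_map):
    """Percentage of voxels in a scale-specific map that also fall
    inside a binarized meta-analytic map."""
    scale_map = np.asarray(scale_map).astype(bool)
    meta_map = np.asarray(meta_map).astype(bool)
    if scale_map.sum() == 0:
        return 0.0
    return 100.0 * np.sum(scale_map & meta_map) / scale_map.sum()

# Toy "volumes": 10 voxels; the scale map covers voxels 2-5 and the
# meta-analytic map covers voxels 4-7, so 2 of 4 voxels overlap (50%)
scale = np.zeros(10); scale[2:6] = 1
meta = np.zeros(10); meta[4:8] = 1
print(percent_overlap(scale, meta))  # -> 50.0
```

The overlap is normalized by the size of the scale-specific map, so each row of the table answers "what fraction of this gradient part lies inside the meta-analytic map".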

17) In Figure 1 (E, bottom right panel), the hippocampal long axis does not show a clear scale-dependent activity gradient, contrary to the claim in the Discussion section, and to previous works (subsection “Three posterior-anterior gradients of spatial scale selectivity”). Activity spans three scales (neighborhood->country) but seems to be involved mainly with 'city' scale. Furthermore, surprisingly it is not selective at all in 'room'/'building' scale. The above have been well established in both VR (humans/rodents) and freely moving rodents, which raises a question of how well the paradigm mimics actual coding of space. This discrepancy should be discussed.

We thank the reviewers for this question. We have performed this analysis anew, calculating the fit in each subject separately and this time using all of the hippocampal voxels, after the addition of CSF and WM regressors to the GLM as suggested in comment 12. We found that there is a significant shift in activity preference from posterior to anterior hippocampus, both in the peak of Gaussian fit to the voxels, and in the maximally active scale, as indicated by a higher than chance slope of a linear fit to the average scale graph (p=0.004, p=0.001, respectively). We updated Figure 2E (previously Figure 1E) and the Materials and methods section accordingly.

Regarding the issue of hippocampal sensitivity to different scales, this analysis reveals that the hippocampus is most active for judgments at the neighborhood, city and country scales. However, this does not mean that it is not involved in the processing of other conditions, such as room and building; our analysis of scale preference indicates the preferred scale for each voxel / region, and not whether other scales also activate the region. With regard to the hippocampus, its activity across all conditions is negative relative to the baseline, so it is difficult to say if it is significantly active for the smaller scales (despite the significant differences between scales). Negative hippocampal BOLD signal is a common finding in the literature, and has been attributed in the past to it being part of the default-mode network, and thus constitutively active during rest (Ekstrom 2010, Stark and Squire 2001, Shipman and Astur 2008). To reiterate, because of the negative BOLD we can only infer from our data that parts of the hippocampus are active for specific scales more than for others, and not whether they are active for each condition specifically. We now discuss this point in the revised manuscript as follows: “Activity in the hippocampus, and in some of the anterior parts of the cortical gradients, was negative relative to baseline, while showing consistent differences in activity between scales. The anterior parts of the three cortical gradients overlap with the DMN, which may be characterized by negative BOLD during tasks (Raichle et al., 2001), and negative BOLD in the hippocampus is also a common finding (Shipman and Astur, 2008). These negative activations were interpreted in the past as potentially reflecting high constitutive activity of these regions during rest more than during active tasks (Ekstrom, 2010; Shipman and Astur, 2008; Stark and Squire, 2001). The fact that these activations are below baseline precludes inference of whether these regions participate in processing of smaller spatial scales or are only active for larger ones.” (Discussion section).

https://doi.org/10.7554/eLife.47492.023

Article and author information

Author details

  1. Michael Peer

    1. Department of Medical Neurosciences, Faculty of Medicine, Hebrew University of Jerusalem, Jerusalem, Israel
    2. Department of Neurology, Hadassah Hebrew University Medical School, Jerusalem, Israel
    3. Department of Psychology, University of Pennsylvania, Philadelphia, United States
    Contribution
    Conceptualization, Software, Formal analysis, Validation, Investigation, Visualization, Methodology, Writing—original draft, Writing—review and editing
    For correspondence
    michael.peer@mail.huji.ac.il
    Competing interests
    No competing interests declared
    ORCID iD: 0000-0002-8373-8558
  2. Yorai Ron

    1. Department of Medical Neurosciences, Faculty of Medicine, Hebrew University of Jerusalem, Jerusalem, Israel
    2. Department of Neurology, Hadassah Hebrew University Medical School, Jerusalem, Israel
    Contribution
    Software, Formal analysis, Validation, Investigation, Methodology, Writing—original draft
    Competing interests
    No competing interests declared
  3. Rotem Monsa

    1. Department of Medical Neurosciences, Faculty of Medicine, Hebrew University of Jerusalem, Jerusalem, Israel
    2. Department of Neurology, Hadassah Hebrew University Medical School, Jerusalem, Israel
    Contribution
    Software, Formal analysis, Validation, Investigation, Methodology, Writing—original draft
    Competing interests
    No competing interests declared
  4. Shahar Arzy

    1. Department of Medical Neurosciences, Faculty of Medicine, Hebrew University of Jerusalem, Jerusalem, Israel
    2. Department of Neurology, Hadassah Hebrew University Medical School, Jerusalem, Israel
    Contribution
    Conceptualization, Software, Formal analysis, Supervision, Funding acquisition, Validation, Investigation, Methodology, Writing—original draft, Writing—review and editing
    Competing interests
    No competing interests declared
    ORCID iD: 0000-0001-6500-8095

Funding

Israel Science Foundation (1306/18 and 3213/19)

  • Shahar Arzy

Fulbright Association

  • Michael Peer

Eva, Luis and Sergio Lamas Scholarship Fund

  • Michael Peer

The funders had no role in study design, data collection and interpretation, or the decision to submit the work for publication.

Acknowledgements

This work was supported by the Israeli Science Foundation (Grant No. 1306/18 and 3213/19). MP is supported by a Fulbright postdoctoral fellowship from the United States–Israel Educational Foundation, and by the Eva, Luis and Sergio Lamas Scholarship Fund. We wish to thank our study participants, Assaf Yohalashet, Yuval Porat, Lee Ashkenazi and Leon Deouell from the ELSC neuroimaging unit for their help in MRI scanning, Noam Saadon-Grosman for help with the analyses, and Gregory Peters-Founshtein and Rachel Fried for helpful comments.

Ethics

Human subjects: All subjects provided written informed consent, and the study was approved by the ethical committee of the Hadassah Hebrew University Medical Center (protocol 0657-15-HMO).

Senior Editor

  1. Timothy E Behrens, University of Oxford, United Kingdom

Reviewing Editor

  1. Muireann Irish, University of Sydney, Australia

Reviewers

  1. Dori Derdikman, Technion - Israel Institute of Technology, Israel
  2. Buddhika Bellana, Johns Hopkins University, United States

Publication history

  1. Received: April 8, 2019
  2. Accepted: August 5, 2019
  3. Version of Record published: September 10, 2019 (version 1)

Copyright

© 2019, Peer et al.

This article is distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use and redistribution provided that the original author and source are credited.


