Spatial Navigation: A question of scale

An fMRI experiment reveals distinct brain regions that respond in a graded manner as humans process distance information across increasing spatial scales.
  1. Muireann Irish (corresponding author)
  2. Siddharth Ramanan
  1. University of Sydney, Australia
  2. Macquarie University, Australia

Just like our ancestors before us, humans must be able to navigate both familiar and new environments, whether this involves driving to work or finding our way around a new city. Successful spatial navigation depends on many cognitive processes, including memory, attention, and our perception of direction and distance (Epstein et al., 2017). A key issue, however, is that spatial environments vary considerably in size and complexity. To date, most research on spatial navigation has focused on small spatial scales, such as navigating within a room or a building (Wolbers and Wiener, 2014). It remains unclear how accurately we can estimate distances between locations on a larger scale, such as whether the Taj Mahal is closer to the Pyramids of Giza or the Great Wall of China, and how these different spatial scales are represented in the brain.

Now, in eLife, Michael Peer, Yorai Ron, Rotem Monsa and Shahar Arzy – who are based at the Hebrew University of Jerusalem, the Hadassah Medical Center and the University of Pennsylvania – report a simple but elegant experiment that teases apart which brain regions are recruited when we process information about environments on different spatial scales (Peer et al., 2019). Peer et al. asked internationally travelled adults to name two locations they were personally familiar with at each of six spatial ‘scales’. These scales ranged from small, spatially confined areas (e.g. rooms and buildings) through medium-sized regions (e.g. local neighborhoods and cities) to expansive geographical areas (e.g. countries and continents; Figure 1A). The experiment was then personalized by asking each participant to name eight familiar items within each location.

Figure 1. How different spatial environments are represented in the human brain. (A) In order to navigate successfully, humans must be able to judge distances between objects on both small (e.g. rooms and buildings) and large (e.g. cities and countries) scales. (B) Peer et al. …

A few days later, participants underwent a functional magnetic resonance imaging (fMRI) experiment to determine which areas of the brain are selectively involved in spatial processing. This technique enables researchers to measure increases in blood flow and oxygen delivery to parts of the brain, and thereby determine which regions are more ‘active’ when a person engages in a cognitive task. During the experiment, participants judged distances between a ‘target’ item from their personal list (e.g. a table in their bedroom) and two other items from the same location (e.g. a chair or a bed in their bedroom). This allowed Peer et al. to investigate which brain regions respond to small, medium and large spatial scales, and which regions are insensitive to scale but respond to other location or proximity information.

The experiment identified three main clusters of brain regions that are important for processing different spatial scales. Strikingly, activity within all three clusters shifted in a ‘graded’ manner depending on whether participants were processing spatial information on a local or a more global scale. When participants judged distances on a small scale in local environments, the posterior portions of all three clusters were engaged; when they judged distances on a larger scale, the pattern of activity shifted towards the anterior portions of the clusters (Figure 1B).

These findings align remarkably well with previous work showing that the human hippocampus – a region of the brain involved in spatial navigation (Burgess et al., 2002) – represents object position and spatial information, such as direction and distance between objects, as a graded pattern of activity (Evensmoen et al., 2015; Evensmoen et al., 2013). The latest study, however, extends our understanding by highlighting how graded patterns of activity move from posterior to anterior regions of the spatial processing network outside of the hippocampus, depending on the spatial scale being processed (Figure 1).

The work presented here provides new insights into how humans navigate within different environments. From a clinical perspective, appreciating how humans dynamically zoom in or out of different spatial scales could help refine how various neurological conditions are diagnosed. This is most relevant for neurodegenerative disorders, such as Alzheimer’s disease, in which disorientation and a distorted sense of direction are often early symptoms (Coughlan et al., 2018; Tu et al., 2015). Whether the altered sense of direction and difficulties in judging proximity that are associated with Alzheimer’s disease are due to changes in the way that regions of the brain represent spatial scale is an important question for future studies to address.


Article and author information

Author details

  1. Muireann Irish

    Muireann Irish is in the Brain and Mind Centre and School of Psychology, University of Sydney, Sydney, Australia, and the ARC Centre of Excellence in Cognition and its Disorders, Macquarie University, North Ryde, Australia

    For correspondence
    muireann.irish@sydney.edu.au
    Competing interests
    No competing interests declared
ORCID: 0000-0002-4950-8169
  2. Siddharth Ramanan

    Siddharth Ramanan is in the Brain and Mind Centre and School of Psychology, University of Sydney, Sydney, Australia, and the ARC Centre of Excellence in Cognition and its Disorders, Macquarie University, North Ryde, Australia

    Competing interests
    No competing interests declared
ORCID: 0000-0002-8591-042X


Copyright

© 2019, Irish and Ramanan

This article is distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use and redistribution provided that the original author and source are credited.

