Motion Processing: How the brain stays in sync with the real world

The brain can predict the location of a moving object to compensate for the delays caused by the processing of neural signals.
  1. Damian Koevoet
  2. Andre Sahakian
  3. Samson Chota (corresponding author)
  1. Experimental Psychology, Helmholtz Institute, Utrecht University, Netherlands

In professional baseball, the batter has to hit a ball that can be travelling as fast as 170 kilometers per hour. Part of the challenge is that the batter only has access to outdated information: it takes the brain about 80–100 milliseconds to process visual information, during which time the baseball will have moved about 4.5 meters closer to the batter (Allison et al., 1994; Thorpe et al., 1996). This should make it virtually impossible to hit the baseball consistently, yet batters in Major League Baseball manage to do so about 90% of the time. How is this possible?
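
As a quick sanity check on these numbers (a back-of-the-envelope script of our own; only the speed and delay figures come from the text above):

```python
# Back-of-the-envelope check: how far does a 170 km/h pitch travel during
# the brain's 80-100 ms visual processing delay? (Figures from the text.)
speed_ms = 170.0 / 3.6                     # km/h to m/s, ~47.2 m/s
for delay_ms in (80, 100):
    print(f"{delay_ms} ms delay -> ball moves {speed_ms * delay_ms / 1000:.1f} m")
# 80 ms -> 3.8 m; 100 ms -> 4.7 m, bracketing the ~4.5 m quoted above.
```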

Fortunately, baseballs and other objects in our world are governed by the laws of physics, so it is usually possible to predict their trajectories. It has been proposed that the brain can work out where a moving object is in almost real time by exploiting this predictability to compensate for the delays caused by processing (Hogendoorn and Burkitt, 2019; Kiebel et al., 2008; Nijhawan, 1994). However, it has not been clear how the brain might be able to do this.

Since these predictions must be made within a matter of milliseconds, highly time-sensitive methods are needed to study the process. Previous experiments lacked the temporal precision to pin down the exact timing of the underlying brain activity (Wang et al., 2014). Now, in eLife, Philippa Anne Johnson and colleagues at the University of Melbourne and the University of Amsterdam report new insights into motion processing (Johnson et al., 2023).

Johnson et al. used a combination of electroencephalography (EEG) recordings and pattern recognition algorithms to investigate how long it took participants to process the location of objects that either flashed in one place (static objects) or moved in a straight line (moving objects). Using machine learning techniques, Johnson et al. first identified how the brain represents a non-moving object (Grootswagers et al., 2017): they accurately mapped the patterns of neural activity that corresponded to the location of the static object during the experiment. Participants took about 80 milliseconds to process this information (Figure 1).

Figure 1. Motion processing in the human brain.

Johnson et al. compared how long it takes the brain to process visual information about static objects and moving objects. The static objects (top) did not move but were briefly shown in unpredictable locations on the screen: the delay between the appearance of the object and the representation of its location in the brain was about 80 milliseconds. However, when the object moved in a predictable manner (bottom), the delay was much smaller.
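
To give a concrete sense of the decoding approach described above, here is a minimal sketch of time-resolved position decoding on synthetic data. It is not Johnson et al.'s actual pipeline: the epoch layout, the injected signal, and the logistic-regression classifier are all illustrative assumptions.

```python
# Minimal sketch of time-resolved decoding of stimulus position from EEG
# epochs. NOT Johnson et al.'s pipeline: the synthetic data, epoch layout,
# injected signal and classifier choice are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_channels, n_times = 200, 32, 60    # assumed epoch layout
X = rng.standard_normal((n_trials, n_channels, n_times))
y = rng.integers(0, 8, size=n_trials)          # 8 possible stimulus locations

# Inject a weak location-dependent signal from sample 20 onward, standing
# in for the ~80 ms processing delay of a static stimulus.
onset = 20
X[:, :8, onset:] += 0.5 * np.eye(8)[y][:, :, None]

# Train and cross-validate a separate classifier at every time point; the
# moment accuracy first rises above chance estimates the processing delay.
accuracy = np.array([
    cross_val_score(LogisticRegression(max_iter=1000), X[:, :, t], y, cv=5).mean()
    for t in range(n_times)
])
chance = 1 / 8
print("decoding exceeds chance from sample:", int(np.argmax(accuracy > chance + 0.1)))
```

Applied to real epochs, the onset of above-chance decoding provides the kind of delay estimate reported above (about 80 milliseconds for static objects).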

Strikingly, Johnson et al. discovered that the brain represented the moving object at a location different from where one would expect it to be (i.e., not at the location from 80 milliseconds earlier). Instead, the internal representation of the moving object was aligned with its actual current location, allowing the brain to track moving objects in real time. The visual system must therefore be able to shift the represented position by at least 80 milliseconds' worth of movement, indicating that the brain can effectively compensate for temporal processing delays by predicting (or extrapolating) where a moving object will be located in the future.
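
In its simplest form, such a correction amounts to shifting the delayed position estimate forward along the object's trajectory by the processing delay, as in this minimal sketch (our illustration, assuming constant velocity and a known delay):

```python
# Minimal sketch of delay compensation by linear extrapolation (our
# illustration; assumes constant velocity and a known processing delay).
def extrapolate(delayed_position, velocity, delay_s):
    """Shift an ~80 ms old position estimate to the object's likely current position."""
    return tuple(p + v * delay_s for p, v in zip(delayed_position, velocity))

# A ball last 'seen' at (10.0, 1.5) m, approaching the batter at 47 m/s
# along the x-axis, with an 80 ms processing delay:
print(extrapolate((10.0, 1.5), (-47.0, 0.0), 0.08))   # -> (6.24, 1.5)
```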

To fully grasp how motion prediction compensates for the lag between the external world and the brain, it is important to know where in the visual system this compensatory mechanism operates. Johnson et al. showed that the delay was already fully compensated for in the visual cortex, indicating that the compensation happens early during visual processing. There is evidence to suggest that some degree of motion prediction occurs in the retina (Berry et al., 1999), but Johnson et al. argue that this on its own is not enough to fully compensate for the delays caused by neural processing.

Another possibility is that a brain area involved in a later stage of motion perception, the middle temporal area, also plays a role in predicting the location of an object (Maus et al., 2013). This region is thought to provide predictive feedback signals that help to compensate for the neural processing delay between the real world and the brain (Hogendoorn and Burkitt, 2019). More research is needed to test this theory, for example by recording directly from neurons in the middle temporal area of primates and rodents using intracranial electrodes. Access to neural information with such spatial and temporal precision might be key to identifying where predictions are made and exactly what they predict.

The work of Johnson et al. confirms that motion prediction of around 80–100 milliseconds can almost completely compensate for the lag between events in the real world and their internal representation in the brain. As a result, humans are able to react to incredibly fast events, provided they are predictable, like a baseball thrown at a batter. Neural delays need to be accounted for in all types of information processing within the brain, including the planning and execution of movements. A deeper understanding of such compensatory processes will ultimately help us to understand how the human brain copes with a fast world while the speed of its internal signaling is limited. The evidence here suggests that, during motion perception, we overcome these neural delays by living in our brain's prediction of the present.

Article and author information

Author details

  1. Damian Koevoet

     Damian Koevoet is at the department of Experimental Psychology, Helmholtz Institute, Utrecht University, Utrecht, Netherlands. Contributed equally with Andre Sahakian. Competing interests: no competing interests declared. ORCID: 0000-0002-9395-6524

  2. Andre Sahakian

     Andre Sahakian is at the department of Experimental Psychology, Helmholtz Institute, Utrecht University, Utrecht, Netherlands. Contributed equally with Damian Koevoet. Competing interests: no competing interests declared. ORCID: 0000-0003-0106-1182

  3. Samson Chota

     Samson Chota is at the department of Experimental Psychology, Helmholtz Institute, Utrecht University, Utrecht, Netherlands. For correspondence: s.chota@uu.nl. Competing interests: no competing interests declared. ORCID: 0000-0001-5434-9724

Publication history

  1. Version of Record published: January 19, 2023 (version 1)

Copyright

© 2023, Koevoet, Sahakian et al.

This article is distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use and redistribution provided that the original author and source are credited.

Cite this article

Damian Koevoet, Andre Sahakian, Samson Chota (2023) Motion Processing: How the brain stays in sync with the real world. eLife 12:e85301. https://doi.org/10.7554/eLife.85301
