Hierarchical temporal prediction captures motion processing along the visual pathway

  1. Yosef Singer (corresponding author)
  2. Luke CL Taylor
  3. Ben DB Willmore
  4. Andrew J King (corresponding author)
  5. Nicol S Harper (corresponding author)
  1. University of Oxford, United Kingdom

Abstract

Visual neurons respond selectively to features that become increasingly complex from the eyes to the cortex. Retinal neurons prefer flashing spots of light, primary visual cortical (V1) neurons prefer moving bars, and those in higher cortical areas favor complex features like moving textures. Previously, we showed that V1 simple cell tuning can be accounted for by a basic model implementing temporal prediction: representing features that predict future sensory input from past input (Singer et al., 2018). Here we show that hierarchical application of temporal prediction can capture how tuning properties change across at least two levels of the visual system. This suggests that the brain does not efficiently represent all incoming information; instead, it selectively represents sensory inputs that help in predicting the future. When applied hierarchically, temporal prediction extracts time-varying features that depend on increasingly high-level statistics of the sensory input.
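The core objective described above can be illustrated with a toy sketch: a small network trained to predict the next frame of a visual stream from the preceding few frames, so that its hidden units come to represent predictive features. This is not the authors' model (their code is linked under Data availability); the drifting-bar stimulus, layer sizes, and plain gradient descent here are illustrative assumptions only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stimulus: a bright "bar" drifting across 12 pixels, plus noise.
T, n_pix = 600, 12
stim = np.zeros((T, n_pix))
for t in range(T):
    stim[t, t % n_pix] = 1.0
stim += 0.05 * rng.standard_normal(stim.shape)

# Temporal-prediction setup: input = the k_past most recent frames,
# target = the next frame of the stream.
k_past, n_hidden = 3, 8
X = np.stack([stim[t - k_past:t].ravel() for t in range(k_past, T)])
Y = stim[k_past:]

W1 = 0.1 * rng.standard_normal((k_past * n_pix, n_hidden))
W2 = 0.1 * rng.standard_normal((n_hidden, n_pix))
lr = 0.05

def forward(X):
    H = np.tanh(X @ W1)      # hidden "predictive feature" activations
    return H, H @ W2         # linear readout of the predicted next frame

_, pred0 = forward(X)
err_before = np.mean((pred0 - Y) ** 2)

# Plain batch gradient descent on the mean-squared prediction error.
for _ in range(300):
    H, pred = forward(X)
    d = 2.0 * (pred - Y) / len(X)
    gW2 = H.T @ d
    gH = (d @ W2.T) * (1.0 - H ** 2)   # backprop through tanh
    gW1 = X.T @ gH
    W2 -= lr * gW2
    W1 -= lr * gW1

_, pred1 = forward(X)
err_after = np.mean((pred1 - Y) ** 2)
```

In the hierarchical version described in the paper, this idea is applied again at the next level: a second stage would be trained to predict the future of the first stage's hidden activity H, yielding features tuned to increasingly high-level temporal statistics.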

Data availability

All custom code used in this study was implemented in Python. The code for the models and analyses shown in Figures 1-8 and associated sections can be found at https://bitbucket.org/ox-ang/hierarchical_temporal_prediction/src/master/. The V1 neural response data (Ringach et al., 2002) used for comparison with the temporal prediction model in Figure 6 came from http://ringachlab.net/ ("Data & Code", "Orientation tuning in Macaque V1"). The V1 image response data used to test the models included in Figure 9 were downloaded with permission from https://github.com/sacadena/Cadena2019PlosCB (Cadena et al., 2019). The V1 movie response data used to test these models were collected in the Laboratory of Dario Ringach at UCLA and downloaded from https://crcns.org/data-sets/vc/pvc-1 (Nauhaus and Ringach, 2007; Ringach and Nauhaus, 2009). The code for the models and analyses shown in Figure 9 and the associated section can be found at https://github.com/webstorms/StackTP and https://github.com/webstorms/NeuralPred. The movies used for training the models in Figure 9 are available at https://figshare.com/articles/dataset/Natural_movies/24265498.

Article and author information

Author details

  1. Yosef Singer

    Department of Physiology, Anatomy and Genetics, University of Oxford, Oxford, United Kingdom
    For correspondence
    yosef.singer@stcatz.ox.ac.uk
    Competing interests
    The authors declare that no competing interests exist.
    ORCID: 0000-0002-4480-0574
  2. Luke CL Taylor

    Department of Physiology, Anatomy and Genetics, University of Oxford, Oxford, United Kingdom
    Competing interests
    The authors declare that no competing interests exist.
  3. Ben DB Willmore

    Department of Physiology, Anatomy and Genetics, University of Oxford, Oxford, United Kingdom
    Competing interests
    The authors declare that no competing interests exist.
    ORCID: 0000-0002-2969-7572
  4. Andrew J King

    Department of Physiology, Anatomy and Genetics, University of Oxford, Oxford, United Kingdom
    For correspondence
    andrew.king@dpag.ox.ac.uk
    Competing interests
    The authors declare that no competing interests exist.
    ORCID: 0000-0001-5180-7179
  5. Nicol S Harper

    Department of Physiology, Anatomy and Genetics, University of Oxford, Oxford, United Kingdom
    For correspondence
    nicol.harper@dpag.ox.ac.uk
    Competing interests
    The authors declare that no competing interests exist.
    ORCID: 0000-0002-7851-4840

Funding

Wellcome Trust (WT108369/Z/2015/Z)

  • Ben DB Willmore
  • Andrew J King
  • Nicol S Harper

University of Oxford Clarendon Fund

  • Yosef Singer
  • Luke CL Taylor

The funders had no role in study design, data collection and interpretation, or the decision to submit the work for publication.

Copyright

© 2023, Singer et al.

This article is distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use and redistribution provided that the original author and source are credited.

Metrics

  • 981 views
  • 160 downloads
  • 8 citations

Views, downloads and citations are aggregated across all versions of this paper published by eLife.


Cite this article

Yosef Singer, Luke CL Taylor, Ben DB Willmore, Andrew J King, Nicol S Harper (2023) Hierarchical temporal prediction captures motion processing along the visual pathway. eLife 12:e52599. https://doi.org/10.7554/eLife.52599

