Feedback contribution to surface motion perception in the human early visual cortex
Abstract
Human visual surface perception has neural correlates in early visual cortex, but the role of feedback during surface segmentation in human early visual cortex remains unknown. Feedback projections preferentially enter superficial and deep anatomical layers, which provides a hypothesis for the cortical depth distribution of fMRI activity related to feedback. Using ultra-high field fMRI, we report a depth distribution of activation in line with feedback during the (illusory) perception of surface motion. Our results fit with a signal re-entering V1 at superficial depths, followed by a feedforward sweep of the re-entered information through V2 and V3. The magnitude and sign of the BOLD response strongly depended on the presence of texture in the background, and were additionally modulated by the presence of illusory motion perception, compatible with feedback. In summary, the present study demonstrates the potential of depth-resolved fMRI in tackling mechanistic questions about perception.
Data availability
The fMRI dataset, experimental stimuli, and analysis code are publicly available. The fMRI dataset is available on Zenodo (https://doi.org/10.5281/zenodo.3366301). The software used for the presentation of retinotopic mapping stimuli, and for the corresponding analysis, is available on GitHub (https://github.com/ingo-m/pyprf). Example videos of the main experimental stimuli are available on Zenodo (https://doi.org/10.5281/zenodo.2583017). If you would like to reproduce the experimental stimuli, the respective PsychoPy code can be found on GitHub (https://github.com/ingo-m/PacMan/tree/master/stimuli/experiment). The same repository also contains the analysis code and a brief description of how to reproduce the analysis (https://github.com/ingo-m/PacMan). High-level visualisations (e.g. cortical depth profiles and signal timecourses) and group-level statistical tests are implemented in a separate repository (https://github.com/ingo-m/py_depthsampling/tree/PacMan).
- Dataset: Feedback contribution to surface motion perception in the human early visual cortex. Zenodo, https://doi.org/10.5281/zenodo.3366301.
Article and author information
Author details
Funding
Nederlandse Organisatie voor Wetenschappelijk Onderzoek (452-11-002)
- Kâmil Uludağ
Nederlandse Organisatie voor Wetenschappelijk Onderzoek (406-14-085)
- Ingo Marquardt
- Kâmil Uludağ
The funders had no role in study design, data collection and interpretation, or the decision to submit the work for publication.
Reviewing Editor
- Tobias H Donner, University Medical Center Hamburg-Eppendorf, Germany
Ethics
Human subjects: Healthy participants gave informed consent before the experiment, and the study protocol was approved by the local ethics committee of the Faculty of Psychology and Neuroscience, Maastricht University (reference number: ERCPN 180_03_06_2017).
Version history
- Received: August 8, 2019
- Accepted: June 3, 2020
- Accepted Manuscript published: June 4, 2020 (version 1)
- Accepted Manuscript updated: June 9, 2020 (version 2)
- Version of Record published: June 24, 2020 (version 3)
Copyright
© 2020, Marquardt et al.
This article is distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use and redistribution provided that the original author and source are credited.
Metrics
- 1,362 views
- 191 downloads
- 7 citations
Views, downloads and citations are aggregated across all versions of this paper published by eLife.
Further reading
- Neuroscience
Object classification has been proposed as a principal objective of the primate ventral visual stream and has been used as an optimization target for deep neural network (DNN) models of the visual system. However, visual brain areas represent many different types of information, and optimizing for classification of object identity alone does not constrain how other information may be encoded in visual representations. Information about different scene parameters may be discarded altogether (‘invariance’), represented in non-interfering subspaces of population activity (‘factorization’), or encoded in an entangled fashion. In this work, we provide evidence that factorization is a normative principle of biological visual representations. In the monkey ventral visual hierarchy, we found that factorization of object pose and background information from object identity increased in higher-level regions and strongly contributed to improving object identity decoding performance. We then conducted a large-scale analysis of factorization of individual scene parameters – lighting, background, camera viewpoint, and object pose – in a diverse library of DNN models of the visual system. Models which best matched neural, fMRI, and behavioral data from both monkeys and humans across 12 datasets tended to be those which factorized scene parameters most strongly. Notably, invariance to these parameters was not as consistently associated with matches to neural and behavioral data, suggesting that maintaining non-class information in factorized activity subspaces is often preferred to dropping it altogether. Thus, we propose that factorization of visual scene information is a widely used strategy in brains and DNN models thereof.
- Neuroscience
The sensorimotor system can recalibrate itself without our conscious awareness, a type of procedural learning whose computational mechanism remains undefined. Recent findings on implicit motor adaptation, such as over-learning from small perturbations and fast saturation for increasing perturbation size, challenge existing theories based on sensory errors. We argue that perceptual error, arising from the optimal combination of movement-related cues, is the primary driver of implicit adaptation. Central to our theory is the increasing sensory uncertainty of visual cues with increasing perturbations, which was validated through perceptual psychophysics (Experiment 1). Our theory predicts the learning dynamics of implicit adaptation across a spectrum of perturbation sizes on a trial-by-trial basis (Experiment 2). It explains proprioception changes and their relation to visual perturbation (Experiment 3). By modulating visual uncertainty in perturbation, we induced unique adaptation responses in line with our model predictions (Experiment 4). Overall, our perceptual error framework outperforms existing models based on sensory errors, suggesting that perceptual error in locating one’s effector, supported by Bayesian cue integration, underpins the sensorimotor system’s implicit adaptation.