Human visual surface perception has neural correlates in early visual cortex, but the role of feedback during surface segmentation in human early visual cortex remains unknown. Feedback projections preferentially enter superficial and deep anatomical layers, which provides a hypothesis for the cortical depth distribution of fMRI activity related to feedback. Using ultra-high field fMRI, we report a depth distribution of activation in line with feedback during the (illusory) perception of surface motion. Our results fit with a signal re-entering in superficial depths of V1, followed by a feedforward sweep of the re-entered information through V2 and V3. The magnitude and sign of the BOLD response strongly depended on the presence of texture in the background, and were additionally modulated by the presence of illusory motion perception compatible with feedback. In summary, the present study demonstrates the potential of depth-resolved fMRI for tackling mechanistic questions about perception.
The fMRI dataset, experimental stimuli, and analysis code are publicly available. The fMRI dataset is available on Zenodo (https://doi.org/10.5281/zenodo.3366301). The software used for the presentation of retinotopic mapping stimuli, and for the corresponding analysis, is available on GitHub (https://github.com/ingo-m/pyprf). Example videos of the main experimental stimuli are available on Zenodo (https://doi.org/10.5281/zenodo.2583017). If you would like to reproduce the experimental stimuli, the respective PsychoPy code can be found on GitHub (https://github.com/ingo-m/PacMan/tree/master/stimuli/experiment). The same repository also contains the analysis code and a brief description of how to reproduce the analysis (https://github.com/ingo-m/PacMan). High-level visualisations (e.g. cortical depth profiles and signal timecourses) and group-level statistical tests are implemented in a separate repository (https://github.com/ingo-m/py_depthsampling/tree/PacMan).
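For readers who want to locate the archived data programmatically, the Zenodo record pages can be derived from the DOIs listed above. The sketch below is a convenience helper, not part of the study's analysis code; it assumes Zenodo's standard mapping from a `10.5281/zenodo.<id>` DOI to a `https://zenodo.org/record/<id>` landing page.

```python
def zenodo_record_url(doi):
    """Map a Zenodo DOI (e.g. '10.5281/zenodo.3366301') to its record page URL.

    Assumes the conventional Zenodo DOI prefix; raises ValueError otherwise.
    """
    prefix = "10.5281/zenodo."
    if not doi.startswith(prefix):
        raise ValueError("not a Zenodo DOI: " + doi)
    record_id = doi[len(prefix):]
    return "https://zenodo.org/record/" + record_id

# The two Zenodo DOIs from the data availability statement:
print(zenodo_record_url("10.5281/zenodo.3366301"))  # fMRI dataset
print(zenodo_record_url("10.5281/zenodo.2583017"))  # example stimulus videos
```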
Dataset: Feedback contribution to surface motion perception in the human early visual cortex. Zenodo, https://doi.org/10.5281/zenodo.3366301.
- Kâmil Uludağ
- Ingo Marquardt
- Kâmil Uludağ
The funders had no role in study design, data collection and interpretation, or the decision to submit the work for publication.
Human subjects: Healthy participants gave informed consent before the experiment, and the study protocol was approved by the local ethics committee of the Faculty of Psychology and Neuroscience, Maastricht University (reference number: ERCPN 180_03_06_2017).
- Tobias H Donner, University Medical Center Hamburg-Eppendorf, Germany
© 2020, Marquardt et al.
This article is distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use and redistribution provided that the original author and source are credited.
Dynamics of excitable cells and networks depend on the membrane time constant, set by membrane resistance and capacitance. Whereas pharmacological and genetic manipulations of ionic conductances of excitable membranes are routine in electrophysiology, experimental control over capacitance remains a challenge. Here, we present capacitance clamp, an approach that allows electrophysiologists to mimic a modified capacitance in biological neurons via an unconventional application of the dynamic clamp technique. We first demonstrate the feasibility of quantitatively modulating capacitance in a mathematical neuron model and then confirm the functionality of capacitance clamp in in vitro experiments in granule cells of rodent dentate gyrus with up to threefold virtual capacitance changes. Clamping of capacitance thus constitutes a novel technique to probe and decipher mechanisms of neuronal signaling in ways that were so far inaccessible to experimental electrophysiology.
Goal-oriented navigation is widely understood to depend upon internal maps. Although this may be the case in many settings, humans tend to rely on vision in complex, unfamiliar environments. To study the nature of gaze during visually-guided navigation, we tasked humans with navigating to transiently visible goals in virtual mazes of varying levels of difficulty, observing that they took near-optimal trajectories in all arenas. By analyzing participants’ eye movements, we gained insights into how they performed visually-informed planning. The spatial distribution of gaze revealed that environmental complexity mediated a striking trade-off in the extent to which attention was directed towards two complementary aspects of the world model: the reward location and task-relevant transitions. The temporal evolution of gaze revealed rapid, sequential prospection of the future path, evocative of neural replay. These findings suggest that the spatiotemporal characteristics of gaze during navigation are significantly shaped by the unique cognitive computations underlying real-world, sequential decision making.