Pre-saccadic remapping relies on dynamics of spatial attention
Abstract
Each saccade shifts the projections of the visual scene on the retina. It has been proposed that the receptive fields of neurons in oculomotor areas are predictively remapped to account for these shifts. While remapping of the whole visual scene seems prohibitively complex, selection by attention may limit these processes to a subset of attended locations. Because attentional selection consumes time, remapping of attended locations should evolve in time, too. In our study, we cued a spatial location by presenting an attention-capturing cue at different times before a saccade and constructed maps of attentional allocation across the visual field. We observed no remapping of attention when the cue appeared shortly before the saccade. In contrast, when the cue appeared sufficiently early before the saccade, attentional resources were reallocated precisely to the remapped location. Our results show that pre-saccadic remapping takes time to develop, suggesting that it relies on the spatial and temporal dynamics of spatial attention.
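As a rough illustration of the "remapped location" referred to above, the sketch below computes the screen position whose current retinotopic coordinates match the cue's retinotopic coordinates after the saccade, i.e. the cue position shifted by the inverse of the saccade vector. This is only a minimal sketch under the assumption of screen-centered coordinates in degrees of visual angle; the function `remapped_location` is a hypothetical helper for illustration, not part of the authors' analysis code (available at osf.io/3tru6).

```python
# Minimal sketch (not the authors' code): the "remapped" screen location of a
# cued position for an upcoming saccade. Coordinates are assumed to be in
# degrees of visual angle, relative to the screen center.
import numpy as np

def remapped_location(cue_xy, fixation_xy, saccade_target_xy):
    """Screen position whose current retinotopic coordinates equal the cue's
    retinotopic coordinates after the saccade: the cue shifted by the
    inverse of the saccade vector."""
    cue = np.asarray(cue_xy, dtype=float)
    saccade_vector = np.asarray(saccade_target_xy, dtype=float) - np.asarray(fixation_xy, dtype=float)
    return cue - saccade_vector

# Example: fixation at (0, 0), a 10 deg rightward saccade, cue 5 deg above fixation.
# The remapped location lies 10 deg to the left of the cue, at (-10, 5).
print(remapped_location(cue_xy=(0, 5), fixation_xy=(0, 0), saccade_target_xy=(10, 0)))
```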
Data availability
All files are available from the OSF database: https://osf.io/3tru6.
- Dataset of the Peripheral Remapping and Foveal Remapping of attention tasks. Open Science Framework, osf.io/3tru6.
Article and author information
Funding
Deutsche Forschungsgemeinschaft (SZ343/1)
- Martin Szinte
Deutsche Forschungsgemeinschaft (RA2191/1-1)
- Dragan Rangelov
Marie Skłodowska-Curie Individual Fellowship (704537)
- Martin Szinte
The funders had no role in study design, data collection and interpretation, or the decision to submit the work for publication.
Ethics
Human subjects: Experiments were designed according to the ethical requirements specified by the Faculty for Psychology and Pedagogics of the Ludwig-Maximilians-Universität München (approval number 13_b_2015) for experiments involving eye tracking. All participants provided written informed consent, including a consent to publish anonymized data.
Copyright
© 2018, Szinte et al.
This article is distributed under the terms of the Creative Commons Attribution License permitting unrestricted use and redistribution provided that the original author and source are credited.
Metrics
- 1,963 views
- 289 downloads
- 34 citations
Views, downloads and citations are aggregated across all versions of this paper published by eLife.