Interacting rhythms enhance sensitivity of target detection in a fronto-parietal computational model of visual attention
Abstract
Even during sustained attention, enhanced processing of attended stimuli waxes and wanes rhythmically, with periods of enhanced and relatively diminished visual processing (and subsequent target detection) alternating at 4 or 8 Hz in a sustained visual attention task. These alternating attentional states occur alongside alternating dynamical states, in which lateral intraparietal cortex (LIP), the frontal eye field (FEF), and the mediodorsal pulvinar (mdPul) exhibit different activity and functional connectivity at α, β, and γ frequencies, rhythms associated with visual processing, working memory, and motor suppression. To assess whether and how these multiple interacting rhythms contribute to periodicity in attention, we propose a detailed computational model of FEF and LIP. When driven by θ-rhythmic inputs simulating experimentally observed mdPul activity, this model reproduced the rhythmic dynamics and behavioral consequences of the observed attentional states, revealing that the frequencies and mechanisms of the observed rhythms allow for peak sensitivity in visual target detection while maintaining functional flexibility.
Data availability
The current manuscript describes a computational study, so no data were generated for this manuscript. Modelling code is available in the ModelDB open repository.
Article and author information
Author details
Funding
National Institutes of Health (P50 MH109429)
- Ian C Fiebelkorn
- Sabine Kastner
- Nancy J Kopell
- Benjamin Rafael Pittman-Polletta
National Institute of Mental Health (R01-MH64043)
- Ian C Fiebelkorn
- Sabine Kastner
National Eye Institute (R01-EY017699)
- Ian C Fiebelkorn
- Sabine Kastner
The funders had no role in study design, data collection and interpretation, or the decision to submit the work for publication.
Reviewing Editor
- Saskia Haegens, Columbia University College of Physicians and Surgeons, United States
Version history
- Preprint posted: February 18, 2021
- Received: February 19, 2021
- Accepted: January 12, 2023
- Accepted Manuscript published: January 31, 2023 (version 1)
- Version of Record published: April 25, 2023 (version 2)
Copyright
© 2023, Aussel et al.
This article is distributed under the terms of the Creative Commons Attribution License permitting unrestricted use and redistribution provided that the original author and source are credited.
Metrics
- 832 views
- 184 downloads
- 0 citations
Views, downloads and citations are aggregated across all versions of this paper published by eLife.