Neural dynamics of visual ambiguity resolution by perceptual prior
Abstract
Past experiences have enormous power in shaping our daily perception. Currently, the dynamical neural mechanisms underlying this process remain mysterious. Exploiting a dramatic visual phenomenon, where a single experience of viewing a clear image allows instant recognition of a related degraded image, we investigated this question using MEG and 7 Tesla fMRI in humans. We observed that following the acquisition of perceptual priors, different degraded images are represented much more distinctly in neural dynamics starting from ~500 ms after stimulus onset. Content-specific neural activity related to stimulus-feature processing dominated within 300 ms after stimulus onset, while content-specific neural activity related to recognition processing dominated from 500 ms onward. Model-driven MEG-fMRI data fusion revealed the spatiotemporal evolution of neural activities involved in stimulus, attentional, and recognition processing. Together, these findings shed light on how experience shapes perceptual processing across space and time in the brain.
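For readers unfamiliar with the fusion approach mentioned above, the sketch below illustrates one generic way of relating time-resolved MEG representational geometry to ROI-wise fMRI geometry (representational similarity analysis–based fusion) on simulated data. It is a minimal sketch, not the authors' pipeline: the array shapes, number of conditions and ROIs, dissimilarity metric, and all variable names are assumptions for illustration only.

```python
# Minimal, hypothetical sketch of time-resolved RSA and MEG-fMRI fusion on simulated data.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(0)

# Hypothetical dimensions: condition-averaged MEG responses and fMRI ROI patterns
n_images, n_sensors, n_times = 20, 272, 120   # images x MEG sensors x time bins (assumed)
n_rois, n_voxels = 4, 500                     # number of ROIs and voxels per ROI (assumed)

meg = rng.standard_normal((n_images, n_sensors, n_times))    # simulated MEG data
fmri = rng.standard_normal((n_rois, n_images, n_voxels))     # simulated fMRI patterns

# 1) Time-resolved MEG RDMs: correlation distance between sensor patterns of each image pair
meg_rdms = np.stack([pdist(meg[:, :, t], metric="correlation") for t in range(n_times)])

# 2) One fMRI RDM per ROI
fmri_rdms = np.stack([pdist(fmri[r], metric="correlation") for r in range(n_rois)])

# 3) Fusion: Spearman-correlate each ROI's fMRI RDM with the MEG RDM at every time point,
#    yielding a time course of when each region's representational geometry is expressed
fusion = np.zeros((n_rois, n_times))
for r in range(n_rois):
    for t in range(n_times):
        rho, _ = spearmanr(fmri_rdms[r], meg_rdms[t])
        fusion[r, t] = rho

print(fusion.shape)  # (n_rois, n_times)
```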
Data availability
All data generated or analysed during this study are included in the manuscript and supporting files.
Article and author information
Author details
Funding
National Institute of Neurological Disorders and Stroke
- Biyu J He
Klingenstein-Simons Neuroscience Fellowship
- Biyu J He
Department of State Fulbright program
- Carlos González-García
National Science Foundation (BCS-1753218)
- Biyu J He
The funders had no role in study design, data collection and interpretation, or the decision to submit the work for publication.
Ethics
Human subjects: The experiment was approved by the Institutional Review Board of the National Institute of Neurological Disorders and Stroke (under protocol #14-N-0002). All subjects provided written informed consent.
Reviewing Editor
- Christian Büchel, University Medical Center Hamburg-Eppendorf, Germany
Version history
- Received: September 9, 2018
- Accepted: February 25, 2019
- Accepted Manuscript published: March 7, 2019 (version 1)
- Version of Record published: March 13, 2019 (version 2)
Copyright
© 2019, Flounders et al.
This article is distributed under the terms of the Creative Commons Attribution License permitting unrestricted use and redistribution provided that the original author and source are credited.
Metrics
- Page views: 2,804
- Downloads: 381
- Citations: 20
Article citation count generated by polling the highest count across the following sources: Crossref, PubMed Central, Scopus.
Further reading
- Neuroscience
The lateral geniculate nucleus (LGN), a retinotopic relay center where visual inputs from the retina are processed and relayed to the visual cortex, has been proposed as a potential target for artificial vision. At present, it is unknown whether optogenetic LGN stimulation is sufficient to elicit behaviorally relevant percepts, and the properties of LGN neural responses relevant for artificial vision have not been thoroughly characterized. Here, we demonstrate that tree shrews pretrained on a visual detection task can detect optogenetic LGN activation using an AAV2-CamKIIα-ChR2 construct and readily generalize from visual to optogenetic detection. Simultaneous recordings of LGN spiking activity and primary visual cortex (V1) local field potentials (LFP) during optogenetic LGN stimulation show that LGN neurons reliably follow optogenetic stimulation at frequencies up to 60 Hz and reveal striking phase locking between the V1 LFP and the evoked spiking activity in LGN. These phase relationships were maintained over a broad range of LGN stimulation frequencies, up to 80 Hz, with spike-field coherence values favoring higher frequencies, indicating the ability to relay temporally precise information to V1 using light activation of the LGN. Finally, V1 LFP responses showed sensitivity to LGN optogenetic activation similar to the animal's behavioral performance. Taken together, our findings confirm the LGN as a potential target for visual prosthetics in a highly visual mammal closely related to primates.
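The spike-field phase locking described above can be illustrated with a generic analysis; the sketch below is a minimal, self-contained example on simulated data (the sampling rate, frequency band, filter settings, and phase-locking-value measure are assumptions for illustration, not the pipeline used in the study).

```python
# Minimal sketch of spike-field phase locking between simulated spikes and a simulated LFP.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

rng = np.random.default_rng(1)
fs = 1000.0                                  # sampling rate (Hz), assumed
t = np.arange(0, 10, 1 / fs)                 # 10 s of data

# Simulated 40 Hz LFP component plus noise, and spikes biased toward the LFP trough
lfp = np.sin(2 * np.pi * 40 * t) + 0.5 * rng.standard_normal(t.size)
spike_prob = 0.02 * (1 - np.sin(2 * np.pi * 40 * t)) / 2
spike_idx = np.nonzero(rng.random(t.size) < spike_prob)[0]

# Band-pass the LFP around the frequency of interest and extract instantaneous phase
b, a = butter(4, [30 / (fs / 2), 50 / (fs / 2)], btype="band")
phase = np.angle(hilbert(filtfilt(b, a, lfp)))

# Phase-locking value (PLV): length of the mean resultant vector of spike phases
spike_phases = phase[spike_idx]
plv = np.abs(np.mean(np.exp(1j * spike_phases)))
print(f"n_spikes={spike_idx.size}, PLV={plv:.2f}")
```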
- Neuroscience
Hippocampal place cell sequences have been hypothesized to serve purposes as diverse as the induction of synaptic plasticity, formation and consolidation of long-term memories, and navigation and planning. During spatial behaviors of rodents, sequential firing of place cells at the theta timescale (known as theta sequences) encodes running trajectories, which can be considered one-dimensional behavioral sequences of traversed locations. In a two-dimensional space, however, each single location can be visited along arbitrary one-dimensional running trajectories. Thus, a place cell will generally take part in multiple different theta sequences, raising questions about how this two-dimensional topology can be reconciled with the idea of hippocampal sequences underlying memory of (one-dimensional) episodes. Here, we propose a computational model of cornu ammonis 3 (CA3) and dentate gyrus (DG), where sensorimotor input drives direction-dependent (extrinsic) theta sequences within CA3 that reflect the two-dimensional spatial topology, whereas intrahippocampal CA3-DG projections concurrently produce intrinsic sequences that are independent of the specific running trajectory. Consistent with experimental data, intrinsic theta sequences are less prominent but can nevertheless be detected during theta activity, thereby serving as running-direction-independent landmark cues. We hypothesize that the intrinsic sequences largely reflect replay and preplay activity during non-theta states.
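As a rough, generic illustration of what "theta sequences" are (not the CA3-DG network model proposed in the article), the toy sketch below simulates phase-precessing place cells along a one-dimensional run, so that within each theta cycle the active cells fire in the order of their field centers; all parameters and names are assumed for illustration.

```python
# Toy sketch: phase precession produces theta-timescale sequences of place cells.
import numpy as np

rng = np.random.default_rng(2)

theta_freq = 8.0                            # theta frequency (Hz), assumed
run_speed = 0.2                             # running speed (m/s), assumed
field_centers = np.linspace(0.0, 1.0, 11)   # place-field centers on a 1 m track (assumed)
field_width = 0.15                          # place-field half-width (m), assumed

dt = 0.001
t = np.arange(0, 5, dt)                     # 5 s run across the track
pos = run_speed * t                         # animal position over time
theta_phase = (2 * np.pi * theta_freq * t) % (2 * np.pi)

# Phase precession: a cell's preferred theta phase advances from ~2*pi to ~0 as the animal
# crosses its field, so within one theta cycle cells fire ordered by their field centers,
# forming a temporally compressed sweep of the trajectory (a theta sequence).
spike_trains = []
for cell, center in enumerate(field_centers):
    d = (pos - center) / field_width                               # distance through the field
    in_field = np.abs(d) < 1
    preferred_phase = np.pi * (1 - d)                              # linear phase advance
    dphi = np.angle(np.exp(1j * (theta_phase - preferred_phase)))  # circular phase difference
    rate = 20.0 * in_field * np.exp(-0.5 * (dphi / 0.5) ** 2)      # firing rate (Hz)
    spikes = np.nonzero(rng.random(t.size) < rate * dt)[0]         # Poisson-like spiking
    spike_trains.append(t[spikes])

print([len(s) for s in spike_trains])       # spike counts per simulated place cell
```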