Spatiotemporal neural dynamics of object recognition under uncertainty in humans
Abstract
While there is a wealth of knowledge about core object recognition (our ability to recognize clear, high-contrast object images), how the brain accomplishes object recognition under increased uncertainty remains poorly understood. We investigated the spatiotemporal neural dynamics underlying object recognition under increased uncertainty by combining MEG and 7 Tesla fMRI in humans during a threshold-level object recognition task. We observed an early, parallel rise of recognition-related signals across ventral visual and frontoparietal regions that preceded the emergence of category-related information. Recognition-related signals in ventral visual regions were best explained by a two-state representational format whereby brain activity bifurcated for recognized and unrecognized images. By contrast, recognition-related signals in frontoparietal regions exhibited a reduced representational space for recognized images, yet with sharper category information. These results provide a spatiotemporally resolved view of neural activity supporting object recognition under uncertainty, revealing a pattern distinct from that underlying core object recognition.
Data availability
The analysis code, data, and code to reproduce all figures can be downloaded at https://github.com/BiyuHeLab/HLTP_Fusion_Wu2022/tree/submission
Article and author information
Author details
Funding
- National Institutes of Health (R01EY032085): Biyu J He
The funders had no role in study design, data collection and interpretation, or the decision to submit the work for publication.
Ethics
Human subjects: Data collection procedures followed protocols approved by the institutional review boards of the intramural research program of NINDS/NIH (protocol #14 N-0002) and NYU Grossman School of Medicine (protocol s15-01323). All participants provided written informed consent.
Copyright
© 2023, Wu et al.
This article is distributed under the terms of the Creative Commons Attribution License permitting unrestricted use and redistribution provided that the original author and source are credited.
Further reading
- Neuroscience
This study investigates failures in conscious access resulting from either weak sensory input (perceptual impairments) or unattended input (attentional impairments). Participants viewed a Kanizsa stimulus with or without an illusory triangle within a rapid serial visual presentation of distractor stimuli. We designed a novel Kanizsa stimulus that contained additional ancillary features of different complexity (local contrast and collinearity) that were independently manipulated. Perceptual performance on the Kanizsa stimulus (presence vs. absence of an illusion) was equated between the perceptual (masking) and attentional (attentional blink) manipulation to circumvent common confounds related to conditional differences in task performance. We trained and tested classifiers on electroencephalogram (EEG) data to reflect the processing of specific stimulus features, with increasing levels of complexity. We show that late stages of processing (~200–250 ms), reflecting the integration of complex stimulus features (collinearity, illusory triangle), were impaired by masking but spared by the attentional blink. In contrast, decoding of local contrast (the spatial arrangement of stimulus features) was observed early in time (~80 ms) and was left largely unaffected by either manipulation. These results replicate previous work showing that feedforward processing is largely preserved under both perceptual and attentional impairments. Crucially, however, under matched levels of performance, only attentional impairments left the processing of more complex visual features relatively intact, likely related to spared lateral and local feedback processes during inattention. These findings reveal distinct neural mechanisms associated with perceptual and attentional impairments and thus contribute to a comprehensive understanding of distinct neural stages leading to conscious access.
- Neuroscience
Although recent studies suggest that activity in the motor cortex, in addition to generating motor outputs, receives substantial information regarding sensory inputs, it is still unclear how sensory context adjusts the motor commands. Here, we recorded population neural activity in the motor cortex via microelectrode arrays while monkeys performed flexible manual interceptions of moving targets. During this task, which requires predictive sensorimotor control, the activity of most neurons in the motor cortex encoding upcoming movements was influenced by ongoing target motion. Single-trial neural states at the movement onset formed staggered orbital geometries, suggesting that target motion modulates peri-movement activity in an orthogonal manner. This neural geometry was further evaluated with a representational model and recurrent neural networks (RNNs) with task-specific input-output mapping. We propose that the sensorimotor dynamics can be derived from neuronal mixed sensorimotor selectivity and dynamic interaction between modulations.