Bottom-up and top-down computations in word- and face-selective cortex
Abstract
The ability to read a page of text or recognize a person's face depends on category-selective visual regions in ventral temporal cortex (VTC). To understand how these regions mediate word and face recognition, it is necessary to characterize how stimuli are represented and how this representation is used in the execution of a cognitive task. Here, we show that the response of a category-selective region in VTC can be computed as the degree to which the low-level properties of the stimulus match a category template. Moreover, we show that during execution of a task, the bottom-up representation is scaled by the intraparietal sulcus (IPS), and that the level of IPS engagement reflects the cognitive demands of the task. These results provide an account of neural processing in VTC in the form of a model that addresses both bottom-up and top-down effects and quantitatively predicts VTC responses.
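The model described in the abstract can be summarized schematically: a bottom-up response given by the match between a stimulus's low-level features and a category template, multiplied by a top-down gain reflecting IPS engagement. The sketch below is a minimal illustration of that structure only, not the authors' actual model or code; the dot-product similarity, the function names, and the ips_gain parameter are assumptions chosen for clarity.

```python
import numpy as np

def template_match(stimulus_features, category_template):
    """Bottom-up drive: similarity between the stimulus's low-level
    feature vector and a category template (illustrative dot product)."""
    # Normalize so the match reflects pattern similarity, not overall magnitude.
    s = stimulus_features / (np.linalg.norm(stimulus_features) + 1e-12)
    t = category_template / (np.linalg.norm(category_template) + 1e-12)
    return float(np.dot(s, t))

def vtc_response(stimulus_features, category_template, ips_gain=1.0):
    """Predicted VTC response: the bottom-up template match scaled by a
    top-down gain standing in for IPS engagement (hypothetical parameter)."""
    return ips_gain * template_match(stimulus_features, category_template)

# Toy usage: a word-like stimulus matches a word-selective template more
# strongly, and a more demanding task (larger assumed IPS gain) scales it up.
rng = np.random.default_rng(0)
word_template = rng.normal(size=100)
word_stimulus = word_template + 0.5 * rng.normal(size=100)  # resembles template
face_stimulus = rng.normal(size=100)                        # unrelated pattern

print(vtc_response(word_stimulus, word_template, ips_gain=1.0))  # passive viewing
print(vtc_response(word_stimulus, word_template, ips_gain=2.0))  # demanding task
print(vtc_response(face_stimulus, word_template, ips_gain=2.0))  # weaker match
```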
Article and author information
Author details
Funding
- McDonnell Center for Systems Neuroscience: Kendrick N Kay
- Washington University in St. Louis: Kendrick N Kay
- National Science Foundation (BCS-1551330): Jason D Yeatman
The funders had no role in study design, data collection and interpretation, or the decision to submit the work for publication.
Ethics
Human subjects: Informed written consent was obtained from all subjects, and the experimental protocol was approved by the Washington University in St. Louis Institutional Review Board and the University of Minnesota Institutional Review Board.
Copyright
© 2017, Kay & Yeatman
This article is distributed under the terms of the Creative Commons Attribution License permitting unrestricted use and redistribution provided that the original author and source are credited.
Metrics
- 4,453 views
- 803 downloads
- 130 citations
Views, downloads and citations are aggregated across all versions of this paper published by eLife.