A quadratic model captures the human V1 response to variations in chromatic direction and contrast
Abstract
An important goal for vision science is to develop quantitative models of the representation of visual signals at post-receptoral sites. To this end, we develop the quadratic color model (QCM) and examine its ability to account for the BOLD fMRI response in human V1 to spatially uniform, temporal chromatic modulations that systematically vary in chromatic direction and contrast. We find that the QCM explains the same cross-validated variance as a conventional general linear model, with far fewer free parameters. The QCM generalizes to allow prediction of V1 responses to a large range of modulations. We replicate the results for each subject and find good agreement across both replications and subjects. We find that within the LM cone contrast plane, V1 is most sensitive to L-M contrast modulations and least sensitive to L+M contrast modulations. Within V1, we observe little to no change in chromatic sensitivity as a function of eccentricity.
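To make the model concrete, the sketch below illustrates the standard QCM form: a stimulus defined by a chromatic direction and contrast in the LM cone-contrast plane is mapped to an "equivalent contrast" via an elliptical isoresponse contour, and then through a saturating Naka-Rushton nonlinearity. This is a minimal illustration under that assumption; the function and parameter names are ours, not the authors' code, and the fitted parameter values come from fitting to data.

```python
import numpy as np

def qcm_response(theta_deg, contrast, ellipse_angle_deg, minor_axis_ratio,
                 amplitude, semi_saturation):
    """Sketch of a QCM-style response prediction.

    theta_deg:        chromatic direction in the LM cone-contrast plane (degrees)
    contrast:         stimulus contrast along that direction
    ellipse_angle_deg, minor_axis_ratio: shape of the elliptical isoresponse contour
    amplitude, semi_saturation:          Naka-Rushton nonlinearity parameters

    All names and the simplified Naka-Rushton form are illustrative assumptions.
    """
    theta = np.deg2rad(theta_deg)
    phi = np.deg2rad(ellipse_angle_deg)
    # Cone-contrast vector: unit chromatic direction scaled by contrast.
    c = contrast * np.array([np.cos(theta), np.sin(theta)])
    # Rotate into the ellipse's principal axes, then stretch the minor axis
    # so that the elliptical isoresponse contour becomes a circle.
    R = np.array([[np.cos(phi),  np.sin(phi)],
                  [-np.sin(phi), np.cos(phi)]])
    S = np.diag([1.0, 1.0 / minor_axis_ratio])
    equiv_contrast = np.linalg.norm(S @ R @ c)
    # Naka-Rushton: response saturates with increasing equivalent contrast.
    return amplitude * equiv_contrast / (equiv_contrast + semi_saturation)
```

With `minor_axis_ratio = 1` the isoresponse contour is a circle and the predicted response is independent of chromatic direction; ratios below 1 make some directions (e.g. L-M in V1) more effective than others at a given contrast.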
Data availability
The raw fMRI data from our experiment have been deposited to OpenNeuro, under the doi:10.18112/openneuro.ds003752.v1.0.0.
- LFContrast, OpenNeuro, doi:10.18112/openneuro.ds003752.v1.0.0.
Article and author information
Funding
National Science Foundation (DGE-1845298)
- Michael A Barnett
National Institutes of Health (R01 EY10016)
- David Brainard
National Institutes of Health (Core Grant P30 EY001583)
- David Brainard
The funders had no role in study design, data collection and interpretation, or the decision to submit the work for publication.
Ethics
Human subjects: The research was approved by the University of Pennsylvania Institutional Review Board (Protocol: Photoreceptor directed light modulation 817774). All subjects gave informed written consent and were financially compensated for their participation.
Copyright
© 2021, Barnett et al.
This article is distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use and redistribution provided that the original author and source are credited.
Metrics
- 633 views
- 94 downloads
- 6 citations
Views, downloads and citations are aggregated across all versions of this paper published by eLife.