A quadratic model captures the human V1 response to variations in chromatic direction and contrast
Abstract
An important goal for vision science is to develop quantitative models of the representation of visual signals at post-receptoral sites. To this end, we develop the quadratic color model (QCM) and examine its ability to account for the BOLD fMRI response in human V1 to spatially-uniform, temporal chromatic modulations that systematically vary in chromatic direction and contrast. We find that the QCM explains the same cross-validated variance as a conventional general linear model, with far fewer free parameters. The QCM generalizes to allow prediction of V1 responses to a large range of modulations. We replicate the results for each subject and find good agreement across both replications and subjects. We find that within the LM cone contrast plane, V1 is most sensitive to L-M contrast modulations and least sensitive to L+M contrast modulations. Within V1, we observe little to no change in chromatic sensitivity as a function of eccentricity.
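The functional form implied by the abstract can be illustrated with a minimal sketch: a quadratic stage maps an LM cone-contrast vector to an equivalent contrast via an elliptical isoresponse contour, and a saturating nonlinearity maps equivalent contrast to response. All parameter names and values below are hypothetical illustrations, not the fitted values reported in the paper.

```python
import numpy as np

def qcm_response(lm_contrast, minor_axis_ratio=0.2, ellipse_angle_deg=45.0,
                 amplitude=1.0, semi_saturation=0.1):
    """Illustrative sketch of a quadratic-color-model-style response.

    lm_contrast: length-2 array of (L, M) cone contrasts.
    The quadratic stage computes an "equivalent contrast"
    sqrt(c' Q c), where Q encodes an elliptical isoresponse
    contour; a saturating (Naka-Rushton-style) nonlinearity
    then maps equivalent contrast to response. Parameter
    values here are made up for illustration only.
    """
    theta = np.deg2rad(ellipse_angle_deg)
    # Rotate the contrast plane so the ellipse axes align with the coordinates.
    rotation = np.array([[np.cos(theta), np.sin(theta)],
                         [-np.sin(theta), np.cos(theta)]])
    # Stretch the minor axis: directions along it need less contrast
    # to produce the same response (higher sensitivity).
    scaling = np.diag([1.0, 1.0 / minor_axis_ratio])
    q = rotation.T @ scaling.T @ scaling @ rotation
    equivalent_contrast = np.sqrt(lm_contrast @ q @ lm_contrast)
    # Saturating nonlinearity (exponent fixed at 1 for simplicity).
    return amplitude * equivalent_contrast / (equivalent_contrast + semi_saturation)
```

With the 45-degree ellipse angle chosen here, an L-M modulation (e.g. contrast vector `(0.05, -0.05)`) produces a larger response than an L+M modulation of the same magnitude (`(0.05, 0.05)`), consistent with the sensitivity ordering the abstract reports for V1.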
Data availability
The raw fMRI data from our experiment have been deposited to OpenNeuro, under the doi:10.18112/openneuro.ds003752.v1.0.0.
- LFContrast, OpenNeuro, doi:10.18112/openneuro.ds003752.v1.0.0.
Article and author information
Author details
Funding
National Science Foundation (DGE-1845298)
- Michael A Barnett
National Institutes of Health (R01 EY10016)
- David Brainard
National Institutes of Health (Core Grant P30 EY001583)
- David Brainard
The funders had no role in study design, data collection and interpretation, or the decision to submit the work for publication.
Ethics
Human subjects: The research was approved by the University of Pennsylvania Institutional Review Board (Protocol: Photoreceptor directed light modulation 817774). All subjects gave informed written consent and were financially compensated for their participation.
Copyright
© 2021, Barnett et al.
This article is distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use and redistribution provided that the original author and source are credited.
Metrics
- 638 views
- 95 downloads
- 6 citations

Views, downloads and citations are aggregated across all versions of this paper published by eLife.