An image reconstruction framework for characterizing initial visual encoding

  1. Ling-Qi Zhang (corresponding author)
  2. Nicolas P Cottaris
  3. David Brainard
  1. University of Pennsylvania, United States

Abstract

We developed an image-computable observer model of the initial visual encoding that operates on natural image input, based on the framework of Bayesian image reconstruction from the excitations of the retinal cone mosaic. Our model extends previous work on ideal observer analysis and evaluation of performance beyond psychophysical discrimination, takes into account the statistical regularities of the visual environment, and provides a unifying framework for answering a wide range of questions regarding the visual front end. Using the error in the reconstructions as a metric, we analyzed variations in the number of different photoreceptor types in the human retina as an optimal design problem. In addition, the reconstructions allow both visualization and quantification of the information loss due to physiological optics and cone mosaic sampling, and of how this loss varies with eccentricity. Furthermore, in simulations of color deficiencies and interferometric experiments, we found that the reconstructed images provide a reasonable proxy for modeling subjects' percepts. Lastly, we used the reconstruction-based observer for the analysis of psychophysical thresholds and found notable interactions between spatial frequency and chromatic direction in the resulting spatial contrast sensitivity function. Our method is widely applicable to experiments and applications in which the initial visual encoding plays an important role.
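
As a concrete illustration of the reconstruction idea described above, the sketch below shows the general form of a Bayesian (MAP) image reconstruction from cone excitations. It is a minimal MATLAB illustration with hypothetical variable names and toy data: the Poisson likelihood is approximated by a Gaussian term, and a generic sparsity penalty stands in for the learned natural-image prior. It is not the authors' ISETImagePipeline implementation (see Data availability for the actual code).

    % Minimal MAP-reconstruction sketch (toy data; hypothetical variable names).
    nPix  = 64;  nCone = 128;                    % toy dimensions, illustrative only
    R = randn(nCone, nPix);                      % stand-in "render matrix": image -> mean cone excitations
    B = eye(nPix);                               % stand-in sparse basis (identity for simplicity)
    xTrue = max(randn(nPix, 1), 0);              % toy "image"
    m = R * xTrue + 0.01 * randn(nCone, 1);      % simulated noisy cone excitations

    lambda     = 0.1;                            % prior weight (assumed value)
    smoothAbs  = @(u) sqrt(u.^2 + 1e-6);         % smooth surrogate for |u|, so a quasi-Newton solver applies
    negLogPost = @(x) sum((R*x - m).^2) ...      % Gaussian approximation to the Poisson likelihood
                 + lambda * sum(smoothAbs(B*x)); % sparsity penalty on basis coefficients

    x0   = R \ m;                                % least-squares initialization
    xHat = fminunc(negLogPost, x0, ...           % MAP estimate: minimize the negative log posterior
                   optimoptions('fminunc', 'Display', 'off'));

In the paper's pipeline, the roles played here by R and the sparsity penalty correspond to the likelihood functions (render matrices) and learned sparse priors referenced under Data availability.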

Data availability

The MATLAB code used for this paper is available at: https://github.com/isetbio/ISETImagePipeline. In addition, the curated RGB and hyperspectral image datasets, the simulation parameters (including the display and cone mosaic setup), and intermediate results such as the learned sparse priors and likelihood functions (i.e., render matrices) are available at: https://tinyurl.com/26r92c8y

The following previously published data sets were used

Article and author information

Author details

  1. Ling-Qi Zhang

    Department of Psychology, University of Pennsylvania, Philadelphia, United States
    For correspondence
    lingqiz@sas.upenn.edu
    Competing interests
    Ling-Qi Zhang, Funding provided by Facebook Reality Labs.
    ORCID iD: 0000-0001-8468-7927
  2. Nicolas P Cottaris

    Department of Psychology, University of Pennsylvania, Philadelphia, United States
    Competing interests
    Nicolas P Cottaris, Funding provided by Facebook Reality Labs.
  3. David Brainard

    Department of Psychology, University of Pennsylvania, Philadelphia, United States
    Competing interests
    David Brainard, Funding provided by Facebook Reality Labs.
    ORCID iD: 0000-0001-9827-543X

Funding

Facebook Reality Labs

  • Ling-Qi Zhang
  • Nicolas P Cottaris
  • David Brainard

The funders had no role in study design, data collection and interpretation, or the decision to submit the work for publication.

Reviewing Editor

  1. Markus Meister, California Institute of Technology, United States

Version history

  1. Preprint posted: June 2, 2021
  2. Received: June 9, 2021
  3. Accepted: January 14, 2022
  4. Accepted Manuscript published: January 17, 2022 (version 1)
  5. Accepted Manuscript updated: January 18, 2022 (version 2)
  6. Version of Record published: February 15, 2022 (version 3)

Copyright

© 2022, Zhang et al.

This article is distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use and redistribution provided that the original author and source are credited.

Metrics

  • 1,959 views
  • 291 downloads
  • 5 citations

Views, downloads and citations are aggregated across all versions of this paper published by eLife.

Cite this article

  1. Ling-Qi Zhang
  2. Nicolas P Cottaris
  3. David Brainard
(2022)
An image reconstruction framework for characterizing initial visual encoding
eLife 11:e71132.
https://doi.org/10.7554/eLife.71132

