An image reconstruction framework for characterizing initial visual encoding

  1. Ling-Qi Zhang (corresponding author)
  2. Nicolas P Cottaris
  3. David Brainard
  1. University of Pennsylvania, United States

Abstract

We developed an image-computable observer model of the initial visual encoding that operates on natural image input, based on the framework of Bayesian image reconstruction from the excitations of the retinal cone mosaic. Our model extends previous work on ideal observer analysis and evaluation of performance beyond psychophysical discrimination, takes into account the statistical regularities of the visual environment, and provides a unifying framework for answering a wide range of questions regarding the visual front end. Using the error in the reconstructions as a metric, we analyzed variations in the number of different photoreceptor types in the human retina as an optimal design problem. In addition, the reconstructions allow both visualization and quantification of the information loss due to physiological optics and cone mosaic sampling, and of how these vary with eccentricity. Furthermore, in simulations of color deficiencies and interferometric experiments, we found that the reconstructed images provide a reasonable proxy for modeling subjects' percepts. Lastly, we used the reconstruction-based observer for the analysis of psychophysical thresholds, and found notable interactions between spatial frequency and chromatic direction in the resulting spatial contrast sensitivity function. Our method is widely applicable to experiments and applications in which the initial visual encoding plays an important role.
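
The core computation described above, recovering an image from noisy cone excitations by combining a likelihood of the excitations with a prior over natural images, can be illustrated with a short sketch. This is a minimal conceptual example in Python, not the authors' MATLAB/ISETBio implementation: the Gaussian approximation to the Poisson noise, the plain (smoothed) L1 prior, and the names map_reconstruction, R, and lam are all simplifying assumptions.

```python
import numpy as np
from scipy.optimize import minimize

def map_reconstruction(m, R, lam, n_pixels):
    """Maximum a posteriori (MAP) reconstruction of an image from cone excitations.

    m        : observed cone excitations (one entry per cone)
    R        : render matrix mapping image pixels to mean cone excitations
    lam      : weight of the sparse image prior
    n_pixels : number of pixels in the reconstructed image
    """
    def neg_log_posterior(x):
        # Gaussian approximation to the Poisson likelihood of cone excitations
        log_likelihood = -0.5 * np.sum((R @ x - m) ** 2)
        # Smoothed L1 penalty: a stand-in for the learned sparse-coding prior
        log_prior = -lam * np.sum(np.sqrt(x ** 2 + 1e-6))
        return -(log_likelihood + log_prior)

    x0 = np.zeros(n_pixels)  # start the optimizer from a blank image
    return minimize(neg_log_posterior, x0, method="L-BFGS-B").x
```

In the paper, the prior is learned from natural images as a sparse-coding model over basis coefficients; the generic penalty above merely stands in for that learned prior.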

Data availability

The MATLAB code used for this paper is available at https://github.com/isetbio/ISETImagePipeline. In addition, the curated RGB and hyperspectral image datasets, the parameters used in the simulations (including the display and cone mosaic setup), and intermediate results such as the learned sparse priors and likelihood functions (i.e., render matrices) are available at https://tinyurl.com/26r92c8y
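
The render matrices mentioned above can serve as likelihood functions because the initial encoding is approximately linear: each column of the render matrix is the mean cone-excitation pattern evoked by one single-pixel basis image. A hedged sketch of that construction, assuming a hypothetical forward simulator cone_excitations(image) (in the paper's pipeline this role is played by an ISETBio simulation):

```python
import numpy as np

def build_render_matrix(cone_excitations, image_shape):
    """Assemble the render matrix R of a linear forward model by probing it
    with single-pixel basis images.

    cone_excitations : function mapping an image to its mean cone excitations
                       (hypothetical stand-in for an ISETBio simulation)
    image_shape      : (height, width) of the input images
    """
    n_pixels = image_shape[0] * image_shape[1]
    n_cones = cone_excitations(np.zeros(image_shape)).size
    R = np.zeros((n_cones, n_pixels))
    for j in range(n_pixels):
        probe = np.zeros(n_pixels)
        probe[j] = 1.0  # single-pixel basis image
        R[:, j] = cone_excitations(probe.reshape(image_shape))
    return R  # mean excitations of any image are then R @ image.ravel()
```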


Article and author information

Author details

  1. Ling-Qi Zhang

    Department of Psychology, University of Pennsylvania, Philadelphia, United States
    For correspondence
    lingqiz@sas.upenn.edu
    Competing interests
    Ling-Qi Zhang, Funding provided by Facebook Reality Labs.
    ORCID iD: 0000-0001-8468-7927
  2. Nicolas P Cottaris

    Department of Psychology, University of Pennsylvania, Philadelphia, United States
    Competing interests
    Nicolas P Cottaris, Funding provided by Facebook Reality Labs.
  3. David Brainard

    Department of Psychology, University of Pennsylvania, Philadelphia, United States
    Competing interests
    David Brainard, Funding provided by Facebook Reality Labs.
    ORCID iD: 0000-0001-9827-543X

Funding

Facebook Reality Labs

  • Ling-Qi Zhang
  • Nicolas P Cottaris
  • David Brainard

The funders had no role in study design, data collection and interpretation, or the decision to submit the work for publication.

Copyright

© 2022, Zhang et al.

This article is distributed under the terms of the Creative Commons Attribution License permitting unrestricted use and redistribution provided that the original author and source are credited.



Cite this article

Ling-Qi Zhang, Nicolas P Cottaris, David Brainard (2022) An image reconstruction framework for characterizing initial visual encoding. eLife 11:e71132. https://doi.org/10.7554/eLife.71132

