Bayesian analysis of retinotopic maps

  1. Noah C Benson (corresponding author)
  2. Jonathan Winawer

New York University, United States

Abstract

Human visual cortex is organized into multiple retinotopic maps. Characterizing the arrangement of these maps on the cortical surface is essential to many visual neuroscience studies. Typically, maps are obtained by voxel-wise analysis of fMRI data. This method, while useful, maps only a portion of the visual field and is limited by measurement noise and subjective assessment of boundaries. We developed a novel Bayesian mapping approach which combines observation (a subject's retinotopic measurements from small amounts of fMRI time) with a prior (a learned retinotopic atlas). This process automatically draws areal boundaries, corrects discontinuities in the measured maps, and predicts validation data more accurately than an atlas alone or independent datasets alone. This new method can be used to improve the accuracy of retinotopic mapping, to analyze large fMRI datasets automatically, and to quantify differences in map properties as a function of health, development and natural variation between individuals.
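In Bayesian terms, the approach can be written schematically as follows. This is an illustrative sketch of the general framing rather than the paper's specific registration objective; the symbols d (a subject's measured retinotopic data), m (a candidate retinotopic map), and p(m) (the prior derived from the learned atlas) are notation introduced here, not the authors':

p(m \mid d) \propto p(d \mid m)\, p(m), \qquad \hat{m} = \arg\max_{m} \; p(d \mid m)\, p(m)

The reported map corresponds to the maximum a posteriori estimate, which balances fidelity to the individual's measurements against the atlas-based prior.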

Data availability

All data generated or analyzed in this study have been made public on an Open Science Framework website: https://osf.io/knb5g/. Preprocessed MRI data, as well as analyses and source code for reproducing figures and performing additional analyses, can be found at the same site.

Performing Bayesian inference using your own retinotopic maps: to perform Bayesian inference on a FreeSurfer subject, one can use the neuropythy Python library (https://github.com/noahbenson/neuropythy). For convenience, this library has also been packaged into a Docker container that is freely available on Docker Hub (https://hub.docker.com/r/nben/neuropythy). The following command prints an explanation of how to use the Docker container:

> docker run -it --rm nben/neuropythy:v0.5.0 register_retinotopy --help

Detailed instructions on how to use the tools documented in this paper are included on the Open Science Framework website mentioned above.
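As a convenience, here is a minimal command-line sketch of how the containerized tool might be applied to one's own FreeSurfer subject. It is a sketch under assumptions rather than the authoritative interface: the /subjects mount point, the placeholder subject ID bert, and the --verbose flag are illustrative, and the exact options for supplying a subject's measured retinotopy files are those listed by the --help command above and in the OSF documentation.

# Point SUBJECTS_DIR at the local FreeSurfer subjects directory and mount it
# into the container (the /subjects mount point is an assumption; see --help).
> export SUBJECTS_DIR=/path/to/freesurfer/subjects
> docker run -it --rm -v "$SUBJECTS_DIR:/subjects" nben/neuropythy:v0.5.0 register_retinotopy bert --verbose
# 'bert' is a placeholder FreeSurfer subject ID; replace it with your own, and
# add the retinotopy-input options described in the --help output.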

The following data sets were generated

Article and author information

Author details

  1. Noah C Benson

    Department of Psychology, New York University, New York, United States
    For correspondence
    nben@nyu.edu
    Competing interests
    The authors declare that no competing interests exist.
    ORCID iD: 0000-0002-2365-8265
  2. Jonathan Winawer

    Department of Psychology, New York University, New York, United States
    Competing interests
    The authors declare that no competing interests exist.
    ORCID iD: 0000-0001-7475-5586

Funding

National Eye Institute (R01 EY027401)

  • Jonathan Winawer

The funders had no role in study design, data collection and interpretation, or the decision to submit the work for publication.

Ethics

Human subjects: This study was conducted with the approval of the New York University Institutional Review Board (IRB-FY2016-363) and in accordance with the Declaration of Helsinki. Informed consent was obtained from all subjects.

Copyright

© 2018, Benson & Winawer

This article is distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use and redistribution provided that the original author and source are credited.

Cite this article

  1. Noah C Benson
  2. Jonathan Winawer
(2018)
Bayesian analysis of retinotopic maps
eLife 7:e40224.
https://doi.org/10.7554/eLife.40224

