Bayesian analysis of retinotopic maps
Abstract
Human visual cortex is organized into multiple retinotopic maps. Characterizing the arrangement of these maps on the cortical surface is essential to many visual neuroscience studies. Typically, maps are obtained by voxel-wise analysis of fMRI data. This method, while useful, maps only a portion of the visual field and is limited by measurement noise and subjective assessment of boundaries. We developed a novel Bayesian mapping approach which combines observation (a subject's retinotopic measurements from small amounts of fMRI time) with a prior (a learned retinotopic atlas). This process automatically draws areal boundaries, corrects discontinuities in the measured maps, and predicts validation data more accurately than either an atlas alone or independent datasets alone. This new method can be used to improve the accuracy of retinotopic mapping, to analyze large fMRI datasets automatically, and to quantify differences in map properties as a function of health, development, and natural variation between individuals.
Data availability
All data generated or analyzed in this study have been made public on an Open Science Framework website: https://osf.io/knb5g/. Preprocessed MRI data, as well as analyses and source code for reproducing figures and performing additional analyses, can be found on the same website.

Performing Bayesian inference using your own retinotopic maps

To perform Bayesian inference on a FreeSurfer subject, one can use the neuropythy Python library (https://github.com/noahbenson/neuropythy). For convenience, this library has also been packaged into a Docker container that is freely available on Docker Hub (https://hub.docker.com/r/nben/neuropythy). The following command provides an explanation of how to use the Docker image:

> docker run -it --rm nben/neuropythy:v0.5.0 register_retinotopy --help

Detailed instructions on how to use the tools documented in this paper are included on the Open Science Framework website mentioned above.
- Bayesian Models of Human Retinotopic Organization. Open Science Framework, osf.io/knb5g/.
Article and author information
Author details
Funding
National Eye Institute (R01 EY027401)
- Jonathan Winawer
The funders had no role in study design, data collection and interpretation, or the decision to submit the work for publication.
Ethics
Human subjects: This study was conducted with the approval of the New York University Institutional Review Board (IRB-FY2016-363) and in accordance with the Declaration of Helsinki. Informed consent was obtained for all subjects.
Copyright
© 2018, Benson & Winawer
This article is distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use and redistribution provided that the original author and source are credited.
Metrics
- 5,365 views
- 546 downloads
- 119 citations
Views, downloads and citations are aggregated across all versions of this paper published by eLife.