Bayesian analysis of retinotopic maps
Abstract
Human visual cortex is organized into multiple retinotopic maps. Characterizing the arrangement of these maps on the cortical surface is essential to many visual neuroscience studies. Typically, maps are obtained by voxel-wise analysis of fMRI data. This method, while useful, maps only a portion of the visual field and is limited by measurement noise and subjective assessment of boundaries. We developed a novel Bayesian mapping approach which combines observation (a subject's retinotopic measurements from small amounts of fMRI time) with a prior (a learned retinotopic atlas). This process automatically draws areal boundaries, corrects discontinuities in the measured maps, and predicts validation data more accurately than an atlas alone or independent datasets alone. This new method can be used to improve the accuracy of retinotopic mapping, to analyze large fMRI datasets automatically, and to quantify differences in map properties as a function of health, development and natural variation between individuals.
Data availability
All data generated or analyzed in this study have been made public on an Open Science Framework website: https://osf.io/knb5g/. Preprocessed MRI data, as well as analyses and source code for reproducing figures and performing additional analyses, can be found at the same address.
Performing Bayesian inference using your own retinotopic maps. To perform Bayesian inference on a FreeSurfer subject, one can use the neuropythy Python library (https://github.com/noahbenson/neuropythy). For convenience, this library has also been packaged into a Docker container that is freely available on Docker Hub (https://hub.docker.com/r/nben/neuropythy). The following command will print an explanation of how to use the Docker image:
> docker run -it --rm nben/neuropythy:v0.5.0 register_retinotopy --help
Detailed instructions on how to use the tools documented in this paper are included on the Open Science Framework website mentioned above.
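For readers who want to run the tool on their own data, the sketch below shows one way the Docker image might be invoked: a local FreeSurfer subjects directory is mounted into the container before register_retinotopy is called. The local path ($HOME/freesurfer_subjects) and the in-container mount point (/subjects) are illustrative assumptions rather than part of the published instructions; the --help output above lists the options and input files the tool actually expects.
> docker run -it --rm \
>     -v "$HOME/freesurfer_subjects":/subjects \
>     nben/neuropythy:v0.5.0 \
>     register_retinotopy --help
Replacing --help with a subject identifier and the arguments described in the help text (for example, the measured polar-angle and eccentricity files that serve as the observation) then runs the full Bayesian fit for that subject.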
Bayesian Models of Human Retinotopic Organization. Open Science Framework, osf.io/knb5g/.
Article and author information
Funding
National Eye Institute (R01 EY027401) to Jonathan Winawer
The funders had no role in study design, data collection and interpretation, or the decision to submit the work for publication.
Ethics
Human subjects: This study was conducted with the approval of the New York University Institutional Review Board (IRB-FY2016-363) and in accordance with the Declaration of Helsinki. Informed consent was obtained for all subjects.
Copyright
© 2018, Benson & Winawer
This article is distributed under the terms of the Creative Commons Attribution License permitting unrestricted use and redistribution provided that the original author and source are credited.
Metrics
- 5,544 views
- 567 downloads
- 133 citations
Views, downloads and citations are aggregated across all versions of this paper published by eLife.