Multisensory integration (MSI) is the process that allows the brain to bind together spatiotemporally congruent inputs from different sensory modalities to produce single salient representations. While the phenomenology of MSI in vertebrate brains is well described, relatively little is known about cellular and synaptic mechanisms underlying this phenomenon. Here we use an isolated brain preparation to describe cellular mechanisms underlying development of MSI between visual and mechanosensory inputs in the optic tectum of Xenopus tadpoles. We find MSI is highly dependent on the temporal interval between crossmodal stimulus pairs. Over a key developmental period, the temporal window for MSI significantly narrows and is selectively tuned to specific interstimulus intervals. These changes in MSI correlate with developmental increases in evoked synaptic inhibition, and inhibitory blockade reverses observed developmental changes in MSI. We propose a model in which development of recurrent inhibition mediates development of temporal aspects of MSI in the tectum.
Animal experimentation: The Brown University Institutional Animal Care and Use Committee (IACUC) approved all handling of animals in accordance with National Institutes of Health (NIH) guidelines. Experiments were performed under IACUC protocol #1308000008C002, most recently renewed August 12, 2015.
- Gary L Westbrook, Vollum Institute, United States
© 2016, Felch et al.
This article is distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use and redistribution provided that the original author and source are credited.
Bioimage analysis of fluorescent labels is widely used in the life sciences. Recent advances in deep learning (DL) allow automating time-consuming manual image analysis processes based on annotated training data. However, manual annotation of fluorescent features with a low signal-to-noise ratio is somewhat subjective. Training DL models on subjective annotations may be unstable or yield biased models. In turn, these models may be unable to reliably detect biological effects. An analysis pipeline integrating data annotation, ground truth estimation, and model training can mitigate this risk. To evaluate this integrated process, we compared different DL-based analysis approaches. With data from two model organisms (mice, zebrafish) and five laboratories, we show that ground truth estimation from multiple human annotators helps to establish objectivity in fluorescent feature annotations. Furthermore, ensembles of multiple models trained on the estimated ground truth establish reliability and validity. Our research provides guidelines for reproducible DL-based bioimage analyses.
Using multiple human annotators and ensembles of trained networks can improve the performance of deep-learning methods in research.
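The two pipeline ingredients described above can be illustrated with a minimal sketch. The abstract does not specify the estimation method, so the example below uses simple pixel-wise majority voting to merge several annotators' binary masks into an estimated ground truth (more sophisticated estimators, such as STAPLE, additionally weight annotators by estimated reliability), and plain probability averaging to combine the outputs of an ensemble of trained models. All function names here are illustrative, not from the paper.

```python
import numpy as np

def estimate_ground_truth(annotations, threshold=0.5):
    """Estimate a consensus mask from several annotators' binary masks.

    annotations: array-like of shape (n_annotators, H, W) with 0/1 labels.
    A pixel is kept as foreground if more than `threshold` of the
    annotators marked it (simple majority vote for threshold=0.5).
    """
    votes = np.mean(np.asarray(annotations, dtype=float), axis=0)
    return (votes > threshold).astype(np.uint8)

def ensemble_predict(model_outputs, threshold=0.5):
    """Combine probability maps from independently trained models.

    model_outputs: array-like of shape (n_models, H, W) with values in
    [0, 1]. Averaging before thresholding reduces the variance that any
    single model contributes.
    """
    mean_prob = np.mean(np.asarray(model_outputs, dtype=float), axis=0)
    return (mean_prob > threshold).astype(np.uint8)

# Toy example: three annotators labeling a 2x2 image.
masks = [
    [[1, 0], [1, 1]],
    [[1, 0], [0, 1]],
    [[1, 1], [0, 1]],
]
gt = estimate_ground_truth(masks)  # majority vote -> [[1, 0], [0, 1]]
```

In practice, the consensus mask would serve as the training target, and the same averaging step would be applied at inference time across the ensemble's probability maps before thresholding.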