Multisensory integration in the developing tectum is constrained by the balance of excitation and inhibition

  1. Daniel L Felch
  2. Arseny S Khakhalin
  3. Carlos D Aizenman (corresponding author)
  1. Brown University, United States
  2. Bard College, United States

Abstract

Multisensory integration (MSI) is the process that allows the brain to bind together spatiotemporally congruent inputs from different sensory modalities into single, salient representations. While the phenomenology of MSI in vertebrate brains is well described, relatively little is known about the cellular and synaptic mechanisms underlying this phenomenon. Here we use an isolated brain preparation to describe the cellular mechanisms underlying the development of MSI between visual and mechanosensory inputs in the optic tectum of Xenopus tadpoles. We find that MSI is highly dependent on the temporal interval between crossmodal stimulus pairs. Over a key developmental period, the temporal window for MSI narrows significantly and becomes selectively tuned to specific interstimulus intervals. These changes in MSI correlate with developmental increases in evoked synaptic inhibition, and blocking inhibition reverses the observed developmental changes in MSI. We propose a model in which the development of recurrent inhibition mediates the development of the temporal properties of MSI in the tectum.
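
For readers unfamiliar with how MSI is commonly quantified, the sketch below computes the classic multisensory enhancement index (the crossmodal response expressed as a percent change relative to the strongest unimodal response) across a range of interstimulus intervals. This is a minimal, illustrative sketch in the style of the standard index from the MSI literature; all response values and variable names are hypothetical, and this is not the analysis pipeline used in the paper.

```python
# Minimal sketch (illustrative only): quantifying multisensory
# enhancement across interstimulus intervals (ISIs). The index follows
# the classic convention from the MSI literature -- percent change of
# the crossmodal response relative to the strongest unimodal response.
# All response values below are hypothetical and are NOT data from
# this paper.

def enhancement_index(crossmodal, best_unimodal):
    """Percent multisensory enhancement over the best unimodal response.
    Positive values indicate enhancement; negative values, suppression."""
    return 100.0 * (crossmodal - best_unimodal) / best_unimodal

# Hypothetical unimodal response amplitudes for one tectal cell:
visual_only = 4.0          # e.g., evoked spike count, visual stimulus alone
mechanosensory_only = 5.0  # e.g., evoked spike count, mechanosensory alone
best = max(visual_only, mechanosensory_only)

# Hypothetical crossmodal responses at several stimulus onset
# asynchronies (ms). A "temporal window" for MSI appears as
# enhancement restricted to certain intervals.
crossmodal_by_isi = {0: 9.0, 50: 8.0, 100: 6.0, 250: 5.5, 500: 5.0}

for isi_ms, response in sorted(crossmodal_by_isi.items()):
    print(f"ISI {isi_ms:4d} ms: enhancement = "
          f"{enhancement_index(response, best):+6.1f}%")
```

In this toy example, enhancement simply decays as the interstimulus interval grows; the paper's central finding concerns how the shape of this temporal window changes over development, narrowing and becoming tuned to specific intervals as synaptic inhibition matures.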

Article and author information

Author details

  1. Daniel L Felch

    Department of Neuroscience, Brown University, Providence, United States
  2. Arseny S Khakhalin

    Department of Biology, Bard College, Annandale-on-Hudson, United States
  3. Carlos D Aizenman

    Department of Neuroscience, Brown University, Providence, United States
    For correspondence: Carlos_Aizenman@brown.edu

Competing interests

The authors declare that no competing interests exist.

Ethics

Animal experimentation: The Brown University Institutional Animal Care and Use Committee (IACUC) approved all handling of animals in accordance with National Institutes of Health (NIH) guidelines. Experiments were performed under IACUC protocol #1308000008C002, most recently renewed August 12, 2015.

Copyright

© 2016, Felch et al.

This article is distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use and redistribution provided that the original author and source are credited.

Metrics

  • 3,070 views
  • 543 downloads
  • 25 citations

Views, downloads and citations are aggregated across all versions of this paper published by eLife.

Cite this article

Daniel L Felch, Arseny S Khakhalin, Carlos D Aizenman (2016) Multisensory integration in the developing tectum is constrained by the balance of excitation and inhibition. eLife 5:e15600. https://doi.org/10.7554/eLife.15600
