Gap junction networks in mushroom bodies participate in visual learning and memory in Drosophila
Abstract
Gap junctions are widely distributed in brains across species and play essential roles in neural information processing. However, the role of gap junctions in insect cognition remains poorly understood. Using a flight simulator paradigm and genetic tools, we found that gap junctions are present in Drosophila Kenyon cells (KCs), the major neurons of the mushroom bodies (MBs), and showed that they play an important role in visual learning and memory. Using a dye coupling approach, we determined the distribution of gap junctions in KCs. Furthermore, we identified a single pair of MB output neurons (MBONs) that possess a gap junction connection to KCs, and provide strong evidence that this connection is also required for visual learning and memory. Together, our results reveal gap junction networks in KCs and the KC-MBON circuit, and bring new insight into the synaptic network underlying the fly's visual learning and memory.
Article and author information
Copyright
© 2016, Liu et al.
This article is distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use and redistribution provided that the original author and source are credited.
Metrics
-
- 4,253
- views
-
- 972
- downloads
-
- 1
- citations
Views, downloads and citations are aggregated across all versions of this paper published by eLife.
Download links
Downloads (link to download the article as PDF)
Open citations (links to open the citations from this article in various online reference manager services)
Cite this article (links to download the citations from this article in formats compatible with various reference manager tools)