Filopodial dynamics and growth cone stabilization in Drosophila visual circuit development
Abstract
Filopodial dynamics are thought to control growth cone guidance, but the types and roles of growth cone dynamics underlying neural circuit assembly in a living brain are largely unknown. To address this issue, we have developed long-term, continuous, fast and high-resolution imaging of growth cone dynamics from axon growth to synapse formation in cultured Drosophila brains. Using R7 photoreceptor neurons as a model, we show that >90% of growth cone filopodia exhibit fast, stochastic dynamics that persist despite ongoing stepwise layer formation. Correspondingly, R7 growth cones stabilize early and change their final position by passive dislocation. N-Cadherin controls both fast filopodial dynamics and growth cone stabilization. Surprisingly, loss of N-Cadherin causes no primary targeting defects, but destabilizes R7 growth cones, causing them to jump between correct and incorrect layers. Hence, growth cone dynamics can influence wiring specificity without a direct role in target recognition and implement simple rules during circuit assembly.
Article and author information
Copyright
© 2015, Özel et al.
This article is distributed under the terms of the Creative Commons Attribution License permitting unrestricted use and redistribution provided that the original author and source are credited.
Metrics
- 4,324 views
- 816 downloads
- 72 citations
Views, downloads, and citations are aggregated across all versions of this paper published by eLife.