Visual System: Wiring up for controlled flight
While birds, bats and insects make flying look easy, this skill is anything but. As animals launch themselves into flight, a world that was once static, slow and fairly predictable suddenly becomes fast and roaring. Suspended in air, which is itself in motion and often turbulent, animals must rely on what they see around them to work out where they want to go, and to stabilize their bodies relative to the ground and their surroundings.
To achieve this, animals rely on optic flow: the apparent motion of objects across the visual field as an animal travels through an environment, such as the landscape swooshing past a bird in flight. Optic flow is key to successful flight, as it allows animals to maintain a steady altitude and to negotiate obstacles while rapidly zooming through complex landscapes, such as a forest full of branches. It also explains why insects often smash into glass windows: being transparent, glass does not generate any optic flow.
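The geometry behind optic flow is well established: for an observer moving through a rigid scene, the apparent motion of a point combines a rotational term (independent of distance) and a translational term (which shrinks with distance). A minimal sketch of this classic equation, with illustrative function and variable names not taken from the article:

```python
import numpy as np

def optic_flow(d, translation, omega, depth):
    """Apparent motion of a point seen along unit viewing direction d,
    for an observer moving with velocity `translation` and rotating with
    angular velocity `omega`; `depth` is the distance to the point.
    Rigid-motion flow: flow = -omega x d - (T - (T.d)d) / depth.
    Note that the rotational term does not depend on depth, while the
    translational term falls off with distance."""
    d = np.asarray(d, dtype=float)
    d = d / np.linalg.norm(d)
    T = np.asarray(translation, dtype=float)
    t_flow = -(T - np.dot(T, d) * d) / depth
    r_flow = -np.cross(np.asarray(omega, dtype=float), d)
    return t_flow + r_flow
```

For forward flight, for example, objects to the side stream backwards, and the nearer they are, the faster they stream; this depth dependence is what lets animals gauge distance from motion alone, and it is exactly the signal a transparent pane fails to provide.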
How animals in flight perceive and use optic flow to adjust their own motion relative to their surroundings has fascinated scientists for a very long time. This process starts in the retina with photoreceptors that detect changes in light and relay this visual information – via a network of neurons – to a region of the brain known as the central complex, which directs where the animal will move based on where it is in space (Rivera-Alba et al., 2011; Heinze, 2021; Seelig and Jayaraman, 2015; Franconville et al., 2018; Hulse et al., 2021). One way to better understand how this circuit supports flight is to investigate how its neurons are wired together in the brain of the fruit fly Drosophila melanogaster, which has been imaged at nanometre resolution using electron microscopy (Zheng et al., 2018).
Early studies of the fruit fly optic lobe – where visual information is first processed – revealed a modular arrangement with repeated vertical cartridges of neurons, one for each unit (known as an ommatidium) of the insect’s eye (Meinertzhagen and O’Neil, 1991; Nern et al., 2015). Each cartridge is analogous to a single camera pixel. But these cartridges are not passive: each one computes changes in light intensity over time and, together with inputs from adjacent cartridges, the direction of motion of objects at a specific point in the fly’s field of view (Behnia et al., 2014; Maisak et al., 2013; Haag et al., 2016; Hassenstein and Reichardt, 1956; Takemura et al., 2013). While the neural circuits spanning adjacent cartridges largely consist of modular repeats, there are differences that contribute to their visual processing properties (Cornean et al., 2024).
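The delay-and-correlate scheme that Hassenstein and Reichardt (1956) proposed for this local motion computation can be sketched in a few lines of Python (an illustrative toy model, not code from any of the cited studies): each arm multiplies one photoreceptor's signal by a delayed copy of its neighbour's, and subtracting the two arms yields an output whose sign reports the direction of motion.

```python
import numpy as np

def reichardt_correlator(left, right, delay=1):
    """Hassenstein-Reichardt elementary motion detector (sketch).
    left/right: luminance time series from two neighbouring
    photoreceptors. Each arm correlates one input with a delayed copy
    of the other; subtracting the arms makes the summed output
    direction-selective: positive for left-to-right motion,
    negative for the reverse."""
    l_delayed = np.roll(left, delay)
    r_delayed = np.roll(right, delay)
    l_delayed[:delay] = 0  # discard wrapped-around samples
    r_delayed[:delay] = 0
    return np.sum(l_delayed * right - left * r_delayed)
```

A pattern drifting from the left photoreceptor towards the right one arrives at `right` slightly later, so the delayed `left` signal lines up with `right` and the first arm dominates, giving a positive output; motion the other way flips the sign.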
A class of neurons in the optic lobe called lobula plate tangential cells (LPTs for short) processes visual information related to movement. LPTs receive inputs from multiple cartridges, allowing them to relay to the central brain a general picture of how objects are moving across the fly’s whole field of view. Most studies have focused on LPTs that process vertical or horizontal motion (which is needed, for example, to detect drift from wind gusts). However, there is also a larger population of LPTs that are sensitive to optic flow; these are less well defined because they are more challenging to study. Now, in eLife, Michael Reiser and colleagues – including Arthur Zhao as first author – report the location and connections of all LPTs in the visual system to better understand how flies process optic flow (Zhao et al., 2024).
The team mapped the complete set of LPTs in the fly’s optic lobe from electron microscopy data. They then combined this information with an existing map of all other neurons in the optic lobe to determine the input field and optic flow properties of each LPT. While some LPTs were already known to encode horizontal, vertical or rotational movement, every LPT in the fruit fly brain now has a predicted function. Zhao et al. (who are based at the Janelia Research Campus, the Champalimaud Centre for the Unknown, and the University of Vermont) also determined which neurotransmitter each LPT releases by directly analysing the anatomical features of synapses observed in electron microscopy images (Eckstein et al., 2020).
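One common way to picture how an input field predicts a function is the matched-filter idea: score how well a neuron's local preferred motion directions line up with the flow field that a particular self-motion would generate on the eye. The sketch below illustrates this for rotational flow; the scoring scheme and names are illustrative assumptions, not the method of Zhao et al.:

```python
import numpy as np

def rotation_match(directions, local_prefs, omega):
    """Cosine similarity between a neuron's receptive field and the
    rotational optic flow generated by angular velocity `omega`.
    directions: (N, 3) unit viewing directions sampled across the eye.
    local_prefs: (N, 3) preferred local motion vector at each direction.
    Returns a value in [-1, 1]; a score near 1 means the cell is well
    matched to sensing rotation about `omega`."""
    dirs = np.asarray(directions, dtype=float)
    omega = np.asarray(omega, dtype=float)
    template = -np.cross(omega, dirs)  # rotational flow at each point
    num = np.sum(np.asarray(local_prefs) * template)
    den = np.linalg.norm(local_prefs) * np.linalg.norm(template)
    return num / den
```

A cell whose local preferences trace the flow of a yaw rotation scores near 1 against a yaw template and near 0 against a roll template, so comparing one score per candidate self-motion assigns each cell a predicted flow sensitivity.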
Zhao et al. followed the trajectories of the LPT axons as they entered the central brain and identified six target regions, which may each control a different type of movement response. The interconnectivity among LPTs, and their convergence onto one of the six brain regions, also provides further evidence that the brain can compose more complex optic flow fields than those sensed individually by each LPT.
While the fruit fly is an excellent laboratory animal for studying the brain and behaviour (Pfeiffer et al., 2010; Dorkenwald et al., 2023), much of our insight into the visual control of flight came from studies of the much larger blowfly. Indeed, electrophysiological recordings of individual LPTs in blowflies yielded early insights into the properties of visual circuits in insects (Borst and Egelhaaf, 1990). Intriguingly, Zhao et al. report finding 58 LPTs in each brain hemisphere, a number almost identical to the ~60 reported in the blowfly. This similarity is unlikely to be a coincidence: it points to a fundamental arrangement of neurons in the visual system that is conserved across fly species. It also suggests that although a larger retina may provide higher visual acuity, the central brain will still extract optic flow information of similar quality across its visual field regardless of the dimensions of the retina. Whether there is a limit to how small the retina can become before optic flow information is compromised may soon be revealed by connectomic studies of miniature insects such as the fairy wasp Megaphragma sp. (Polilov, 2012).
Nevertheless, the comprehensive map of all LPTs in the fruit fly brain provided by Zhao et al., together with the neurotransmitter signature of each neuron and the complete set of neurons they connect with, sets an outstanding new baseline for studying how the brain uses optic flow signals to drive behaviour.
References
- Cornean et al. (2024) Heterogeneity of synaptic connectivity in the fly visual system. Nature Communications 15:1570. https://doi.org/10.1038/s41467-024-45971-z
- Hassenstein and Reichardt (1956) Systemtheoretische Analyse der Zeit-, Reihenfolgen- und Vorzeichenauswertung bei der Bewegungsperzeption des Rüsselkäfers Chlorophanus. Zeitschrift für Naturforschung B 11:513–524. https://doi.org/10.1515/znb-1956-9-1004
- Meinertzhagen and O’Neil (1991) Synaptic organization of columnar elements in the lamina of the wild type in Drosophila melanogaster. The Journal of Comparative Neurology 305:232–263. https://doi.org/10.1002/cne.903050206
- Polilov (2012) The smallest insects evolve anucleate neurons. Arthropod Structure & Development 41:29–34. https://doi.org/10.1016/j.asd.2011.09.001
Copyright
© 2024, Cardona
This article is distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use and redistribution provided that the original author and source are credited.