Neuroscience

Accelerating with FlyBrainLab the discovery of the functional logic of the Drosophila brain in the connectomic and synaptomic era

  1. Aurel A Lazar (corresponding author)
  2. Tingkai Liu
  3. Mehmet Kerem Turkcan
  4. Yiyin Zhou

Department of Electrical Engineering, Columbia University, United States
Tools and Resources
Cite this article as: eLife 2021;10:e62362 doi: 10.7554/eLife.62362

Abstract

In recent years, a wealth of Drosophila neuroscience data have become available including cell type and connectome/synaptome datasets for both the larva and adult fly. To facilitate integration across data modalities and to accelerate the understanding of the functional logic of the fruit fly brain, we have developed FlyBrainLab, a unique open-source computing platform that integrates 3D exploration and visualization of diverse datasets with interactive exploration of the functional logic of modeled executable brain circuits. FlyBrainLab’s User Interface, Utilities Libraries and Circuit Libraries bring together neuroanatomical, neurogenetic and electrophysiological datasets with computational models of different researchers for validation and comparison within the same platform. Seeking to transcend the limitations of the connectome/synaptome, FlyBrainLab also provides libraries for molecular transduction arising in sensory coding in vision/olfaction. Together with sensory neuron activity data, these libraries serve as entry points for the exploration, analysis, comparison, and evaluation of circuit functions of the fruit fly brain.

Introduction

The era of connectomics/synaptomics ushered in the advent of large-scale availability of highly complex fruit fly brain data (Chiang et al., 2011; Berck et al., 2016; Takemura et al., 2017a; Scheffer et al., 2020), while simultaneously highlighting the dearth of computational tools with the speed and scale that can be effectively deployed to uncover the functional logic of fly brain circuits. In the early 2000s, automation tools introduced in computational genomics significantly accelerated the pace of gene discovery from the large amounts of genomic data. Likewise, there is a need to develop tightly integrated computing tools that automate the process of 3D exploration and visualization of fruit fly brain data with the interactive exploration of executable circuits. The fruit fly brain data considered here include neuroanatomy, genetics, and neurophysiology datasets. Due to space limitations, we mostly focus here on exploring, analyzing, comparing, and evaluating executable circuits informed by wiring diagrams derived from neuroanatomy datasets currently available in the public domain.

To meet this challenge, we have built an open-source interactive computing platform called FlyBrainLab. FlyBrainLab is uniquely positioned to accelerate the discovery of the functional logic of the Drosophila brain. It is designed with three main capabilities in mind: (1) 3D exploration and visualization of fruit fly brain data, (2) creation of executable circuits directly from the explored and visualized fly brain data in step (1), and (3) interactive exploration of the functional logic of the executable circuits devised in step (2) (see Figure 1).

FlyBrainLab provides, within a single working environment, (left) 3D exploration and visualization of fruit fly brain data, and (right) creation of executable circuit diagrams from the explored and visualized circuit on the left followed by an interactive exploration of the functional logic of executable circuits.

To achieve tight integration of the three main capabilities sketched in Figure 1 into a single working environment, FlyBrainLab integrates fly brain data in the NeuroArch Database (Givon et al., 2015) and provides circuit execution with the Neurokernel Execution Engine (Givon and Lazar, 2016) (see Figure 2a). The NeuroArch Database stores neuroanatomy datasets provided by, for example, FlyCircuit (Chiang et al., 2011), Larva L1EM (Ohyama et al., 2015), the Medulla 7 Column (Takemura et al., 2017a) and Hemibrain (Scheffer et al., 2020), genetics datasets published by, for example, FlyLight (Jenett et al., 2012) and FlyCircuit (Chiang et al., 2011), and neurophysiology datasets including the DoOR (Münch and Galizia, 2016) and our own in vivo recordings (Lazar and Yeh, 2020; Kim et al., 2011; Kim et al., 2015). The Neurokernel Execution Engine (see Figure 2a) supports the execution of fruit fly brain circuits on GPUs. Finally, the NeuroMynerva front-end exhibits an integrated 3D graphical user interface (GUI) and provides the user a unified view of data integration and computation (see Figure 2a (top) and Figure 2b). The FlyBrainLab software architecture is depicted in Appendix 1—figure 1.

The software architecture and the user interface of FlyBrainLab.

(a) The main components of the architecture of FlyBrainLab: (top) NeuroMynerva user-side frontend, (bottom left) NeuroArch Database for storage of fruit fly brain data and executable circuits, (bottom right) Neurokernel Execution Engine for execution of fruit fly brain circuits on GPUs (see also Appendix 1—figure 1 for a schematic diagram of the FlyBrainLab software architecture). (b) NeuroMynerva User Interface. The UI typically consists of five blocks: (1) NeuroNLP 3D Visualization Window with a search bar for NLP queries, providing capabilities for displaying and interacting with fly brain data such as the morphology of neurons and the position of synapses; (2) NeuroGFX Executable Circuits Window, for exploring executable neural circuits with interactive circuit diagrams; (3) Program Execution Window with a built-in Jupyter notebook, executing any Python code including calls to the FlyBrainLab Client (see also Appendix 1.2), for direct access to database queries, visualization, and circuit execution; (4) Info Panel displaying details of highlighted neurons, including the origin of the data, genetic information, morphometric statistics, synaptic partners, etc.; (5) Local File Access Panel with a built-in Jupyter file browser for accessing local files.

To accelerate the generation of executable circuits from fruit fly brain data, NeuroMynerva supports the following workflow.

First, the 3D GUI, called the NeuroNLP window (see Figure 2b, top middle-left), supports the visual exploration of fly brain data, including neuron morphology, synaptome, and connectome from all available data sources, stored in the NeuroArch Database (Givon et al., 2015). With plain English queries (see Figure 2b, top middle-left), a layperson can perform sophisticated database queries with only knowledge of fly brain neuroanatomy (Ukani et al., 2019).

Second, the circuit diagram GUI, called the NeuroGFX window (see Figure 2b, top middle-right) enables the interactive exploration of executable circuits stored in the NeuroArch Database. By retrieving tightly integrated biological data and executable circuit models from the NeuroArch Database, NeuroMynerva supports the interaction and interoperability between the biological circuit (or pathway for short) built for morphological visualization and the executable circuit created and represented as an interactive circuit diagram, and allows them to build on each other. This helps circuit developers to more readily identify the modeling assumptions and the relationship between neuroanatomy, neurocircuitry, and neurocomputation.

Third, the GUIs can operate in tandem with command execution in Jupyter notebooks (see also Figure 2b, bottom center). Consequently, fly brain pathways and circuit diagrams can be equivalently processed using API calls from Python, thereby ensuring the reproducibility of the exploration of similar datasets with minimal modifications. The Neurokernel Execution Engine (Givon and Lazar, 2016) provides circuit execution on multiple computing nodes/GPUs. The tight integration in the database also allows the execution engine to fetch executable circuits directly from the NeuroArch Database. The tight integration between NeuroArch and Neurokernel is reinforced and made user transparent by NeuroMynerva.
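The notebook-driven workflow can be sketched as follows. The block below is illustrative: `fbl.Client` and `executeNLPquery` reflect our reading of the FlyBrainLab Client API and should be checked against the installed version, and a running FlyBrainLab backend is required to actually execute queries, so the connection is deferred to the usage comment.

```python
# Sketch of driving FlyBrainLab programmatically from a Jupyter notebook.
# The query plan is kept as data so the same session can be replayed on a
# similar dataset with minimal modifications.

QUERY_PLAN = [
    "show T4a in column home",   # seed the NeuroNLP window
    "add presynaptic neurons",   # grow the circuit one hop upstream
    "color lime",                # annotate for visual inspection
]

def replay(client, queries=QUERY_PLAN):
    """Send each plain-English query through a connected client and
    collect the per-query results."""
    return [client.executeNLPquery(q) for q in queries]

# Usage (requires a running FlyBrainLab backend):
#   import flybrainlab as fbl
#   results = replay(fbl.Client())
```

Because the query plan is ordinary Python data, it can be versioned alongside the notebook, which is what makes the exploration reproducible.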

Exploration, analysis, execution, comparison, and evaluation of circuit models, either among versions of one's own development or among those published in the literature, are often critical steps toward discovering the functional logic of brain circuits. Six types of explorations, analyses, comparisons, and evaluations are of particular interest. First, build and explore the structure of fly brain circuits with English queries (Use Case 1). Second, explore the structure and function of yet to be discovered brain circuits (Use Case 2). Third, interactively explore executable circuit models (Use Case 3). Fourth, starting from a given dataset and after implementing a number of circuit models published in the literature, analyze and compare these under the same evaluation criteria (Use Case 4). Fifth, automate the construction of executable circuit models from datasets gathered by different labs and analyze, compare, and evaluate the different circuit realizations (Use Case 5). Sixth, analyze, compare, and evaluate fruit fly brain circuit models at different developmental stages (Use Case 6).

In what follows, we present results, supported by the FlyBrainLab Circuits Libraries (see Materials and methods), demonstrating the comprehensive exploration, execution, analysis, comparison, and evaluation capability of FlyBrainLab. While our emphasis here is on building executable circuit models informed by the connectome/synaptome of the fruit fly brain, these libraries together with sensory neuron activity data serve as entry points for an in-depth exploration, execution, analysis, comparison, and evaluation of the functional logic of the fruit fly brain.

Results

Use Case 1: building fly brain circuits with English queries

FlyBrainLab is equipped with a powerful and versatile user interface to build fruit fly brain circuits from connectome and synaptome datasets. The interface is designed to accommodate users with widely different expertise, such as neurobiologists, computational neuroscientists or even college or high school students. Knowledge of the nomenclature of the fruit fly brain is assumed.

The simplest way to build a fly brain circuit is via the NeuroNLP natural language query interface (Ukani et al., 2019). By specifying in plain English cell types, synaptic distribution, pre- and postsynaptic partners, neurotransmitter types, etc., neurons and synapses can be visualized in the NeuroNLP window (see also Figure 2b).

The motion detection pathway in the fruit fly Medulla has been, in part, mapped out thanks to the Medulla 7 Column dataset (Takemura et al., 2015). While much research has focused on the direct, feedforward pathway feeding into the motion-sensitive T4 neurons (Takemura et al., 2017b; Haag et al., 2017), the contribution of the feedback pathways and the neighboring columnar neurons to the motion detection circuit has largely been ignored. To study the circuit that mediates the lateral feedback into the motion detection pathway, we used English queries to quickly visualize the neurons involved. Starting from a T4a neuron in the ‘home’ column that is sensitive to front-to-back motion (Maisak et al., 2013) (‘show T4a in column home’; ‘color lime’), we queried its presynaptic neurons (‘add presynaptic neurons’; ‘color gray’) as well as their presynaptic neurons that are non-columnar, in particular, the Dm and Pm cells (Fischbach and Dittrich, 1989) (‘add presynaptic $Dm$ neurons with at least five synapses’; ‘color cyan’; ‘add presynaptic $Pm$ neurons with at least five synapses’; ‘color yellow’). The resulting visualization of the circuit is depicted in Figure 3 (a1), with neurons mediating cross-columnar interaction highlighted. The retrieved connectivity matrix is shown in Figure 3 (a2) (see also Materials and methods, Use Case 1).
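The connectivity matrices in Figure 3 are rendered on a log10(N+1) scale, where N is the synapse count between a pair of neurons. A minimal sketch of that transformation, assuming the counts have already been retrieved as a matrix (this is not the plotting code FlyBrainLab itself uses):

```python
import numpy as np

def log_scaled_connectivity(counts):
    """Map raw synapse counts N to log10(N + 1), the scale used for the
    connectivity-matrix heatmaps; zero synapses maps to exactly zero."""
    counts = np.asarray(counts, dtype=float)
    return np.log10(counts + 1.0)
```

The +1 offset keeps absent connections at zero while compressing the wide dynamic range of synapse counts into a readable color scale.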

Building fly brain circuits with English queries.

(a1) Lateral feedback pathways in the visual motion detection circuit. (green) a T4a neuron, (gray) neurons presynaptic to the T4a neuron, (cyan) glutamatergic and GABAergic Dm neurons that are presynaptic to the neurons in gray, (yellow) Pm neurons that are presynaptic to the neurons in gray. (a2) Connectivity matrix of the pathways in (a1). (b1) Pathways between MBONs and neurons innervating FB layer 3. (yellow) MBONs that are presynaptic to neurons that have outputs in FB layer 3. (green) Neurons that have outputs in FB layer 3 that are postsynaptic to the MBONs in yellow. (red) MBONs postsynaptic to neurons in green. (b2) Connectivity matrix of the pathways in (b1). (c1) The pathways of the g compartment of the larva fruit fly. (cyan) g compartment MBONs, (yellow) KCs presynaptic to the g compartment MBONs, (green) a DAN presynaptic to the g compartment MBONs, (white) an OAN presynaptic to the g compartment MBONs. (c2) Connectivity matrix of the pathways in (c1). (d1) Pathways between LPTCs and a potential translational motion-sensitive neuron GLN. (yellow) LPTCs, (cyan) GLNs, (red) neurons that form the path between LPTCs and GLNs. (d2) Connectivity matrix of the pathways in (d1). Color bar in (a2), (b2), (c2), and (d2): log10(N+1), where N is the number of synapses between two neurons. (a1)–(d1) are screenshots downloaded from the NeuroNLP Window. The sequence of queries that generates these visualizations is listed in Materials and methods, Use Case 1.

The mushroom body (MB) has long been known as an associative olfactory memory center (Modi et al., 2020), whereas the fan-shaped body (FB) has been shown to be involved in visual pattern memory (Liu et al., 2006). Recently, it has been shown that the Kenyon cells in the MB also receive visual inputs (Li et al., 2020a), and that the MB and FB are interconnected (Li et al., 2020b). The pathway between the MB and the FB, or a particular layer of the FB, can be easily visualized using NeuroNLP. We used English queries to establish and visualize the circuit that directly connects the MB with layer 3 of the FB in the Hemibrain dataset, as depicted in Figure 3 (b1). The connectivity matrix is shown in Figure 3 (b2) (see also Materials and methods, Use Case 1, for the sequence of queries that created this visualization).

Natural language queries supplemented by the NeuroNLP 3D interface and the Info Panel (see also Figure 2b) enable us to inspect, add and remove neurons/synapses. For example, in Figure 3 (c1), we built a simple circuit around the g compartment of the mushroom body (Saumweber et al., 2018) of the Larva L1EM dataset (Ohyama et al., 2015) starting from the MBONs that innervate it. We then inspected these MBONs in the Info Panel and added all KCs presynaptic to each of them by filtering the name ‘KC’ in the presynaptic partner list. Similarly, we added the dopaminergic neurons (DANs) and octopaminergic neurons (OANs) presynaptic to these MBONs. Figure 3 (c2) depicts the connectivity matrix of this MB circuit (see also Materials and methods, Use Case 1, for the full sequence of queries/operations that created this visualization).

The FlyBrainLab UI provides users a powerful yet intuitive tool for building fly brain circuits at any scale, requiring no knowledge of programming or of the data model of the underlying NeuroArch Database. For more advanced users, FlyBrainLab also exposes the full power of the NeuroArch API for directly querying the NeuroArch Database using the NeuroArch JSON format. Utilizing this capability, we built a circuit pathway that potentially carries translational visual motion information into the Noduli (NO) in Figure 3 (d1). The search for this circuit was motivated by a type of cells in honey bees, called TN neurons, that are sensitive to translational visual motion and provide inputs to the NO (Stone et al., 2017). In the Hemibrain dataset, a cell type ‘GLN’ resembles the TN neurons in the honey bee and is potentially a homolog in the fruit fly. We therefore asked if there exists a pathway to these neurons from visual output neurons that are sensitive to wide-field motion, in particular, the lobula plate tangential cells (LPTCs). Using a NeuroArch query, we found all paths between LPTCs and GLNs that are less than three hops long and have at least five synapses in each hop (see also Materials and methods, Use Case 1, for the complete listing of the invoked NeuroArch JSON query). Only the HS cells and H2 cells, but not the CH and VS cells (Hausen, 1984), have robust paths to the GLNs. The connectivity of this circuit is shown in Figure 3 (d2) (see also Materials and methods, Use Case 1).
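The path criterion used here, paths of fewer than three hops with at least five synapses in each hop, can be sketched in plain Python. This illustrates the selection criterion only; the actual search is performed server-side by the NeuroArch JSON query listed in Materials and methods.

```python
def strong_paths(synapses, sources, targets, max_hops=2, min_syn=5):
    """Enumerate simple paths from `sources` to `targets` using only edges
    carrying at least `min_syn` synapses, up to `max_hops` hops.

    synapses: dict mapping (pre, post) neuron names -> synapse count.
    """
    adj = {}
    for (pre, post), n in synapses.items():
        if n >= min_syn:               # drop weak edges up front
            adj.setdefault(pre, []).append(post)
    found = []

    def walk(node, path):
        if node in targets and len(path) > 1:
            found.append(tuple(path))
            return
        if len(path) - 1 >= max_hops:  # hop budget exhausted
            return
        for nxt in adj.get(node, []):
            if nxt not in path:        # keep paths simple (no cycles)
                walk(nxt, path + [nxt])

    for s in sources:
        walk(s, [s])
    return found
```

Filtering weak edges before the traversal keeps the search space small, which is the same reason a per-hop synapse threshold is specified in the NeuroArch query.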

Use Case 2: exploring the structure and function of yet to be discovered brain circuits

Here, we further demonstrate the capabilities of FlyBrainLab in the quest of exploring the structure and function of yet to be discovered fly brain circuits. In particular, we demonstrate several use cases of the Utility Libraries (see Appendix 2) and their interaction with the rest of the FlyBrainLab components.

In the first example, we explore the structure of densely-connected brain circuits in the Hemibrain dataset. Such an exploration is often the starting point in the quest of understanding the function of a brain circuit without any prior knowledge of neuropil boundaries, or the identity of each neuron (see also Materials and methods, Use Case 2). By invoking the NeuroGraph Library on the Hemibrain dataset (see Appendix 2), we extracted eight densely connected neuron groups, as shown in Figure 4a. We then visualized subsets of neurons pseudocolored by group membership as shown in Figure 4b and assigned six of the eight groups to several known brain regions/neuropils. These neuropils include the AL, the MB, the lateral horn (LH), the central complex (CX), the anterior ventrolateral protocerebrum (AVLP), and the superior protocerebrum (SP). The remaining two brain regions correspond to the anterior optic tubercle with additional neurons of the posterior brain (AOTUP) and the anterior central brain (ACB). A circuit diagram depicting the connections between these groups of neurons is shown in Figure 4c.
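As a minimal stand-in for the NeuroGraph pipeline, the grouping step can be sketched with the Louvain implementation shipped in NetworkX; the actual library applies its own preprocessing and scales to the full Hemibrain graph.

```python
from networkx.algorithms.community import louvain_communities
import networkx as nx

def dense_groups(synapse_counts, seed=0):
    """Partition neurons into densely connected groups with the Louvain
    algorithm, using synapse counts as edge weights on an undirected graph.

    synapse_counts: dict mapping (pre, post) neuron names -> synapse count.
    """
    G = nx.Graph()
    for (pre, post), n in synapse_counts.items():
        if G.has_edge(pre, post):
            G[pre][post]["weight"] += n   # merge reciprocal connections
        else:
            G.add_edge(pre, post, weight=n)
    return louvain_communities(G, weight="weight", seed=seed)
```

Fixing the seed makes the (stochastic) Louvain partition reproducible across sessions, which matters when group labels are later assigned to neuropils by visual inspection.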

Exploratory analysis of the fly brain circuits.

(a) Louvain algorithm applied to all neurons in the Hemibrain dataset showing eight groups of densely connected neurons. Color indicates the value of log10(n+1), where n is the number of synapses; values larger than one are shown in the same color as value 1. AOTUP: anterior optic tubercle with additional neurons of the posterior brain, AVLP: anterior ventrolateral protocerebrum, LH: lateral horn, ACB: neurons in the anterior central brain, AL: antennal lobe, SP: superior protocerebrum, CX: central complex, MB: mushroom body. Labels were added after visually inspecting the neurons in each group of neurons in (b). (b) A subset of neurons pseudo-colored according to the group they belong to in (a). (c) A brain-level circuit diagram created by hand according to the grouping of neurons and the inter-group edge information obtained in (a). Visual and olfactory inputs from, respectively, the early visual system (EVS) and antenna (ANT) were added. Groups in the left hemisphere were added by symmetry. (d) Adjacency Spectral Embedding algorithm applied to the VA1v connectome dataset using the NeuroGraph library. The color of each circle indicates the cell-type labeling from the original dataset. Groups of neurons labeled by dashed circles are based on validated cell types. (e) Visualization of neurons analyzed in (d). Neuron colors were assigned according to the groups in (d). (f) A circuit diagram of the VA1v circuit analyzed in (d) automatically generated by the NeuroGraph Library. (g) Connectivity matrix of the lateral horn neurons downstream of the V glomerulus projection neurons of the antennal lobe. Color bar configured in the same way as in (a). (h) Morphology of the neurons in (g). (white) PNs arborizing in the V glomerulus, (red) LHLNs, (cyan) LHONs. (i) A circuit diagram automatically generated by the circuit visualization utilities of NeuroGraph starting with the circuit in (g) and (h), and the superior lateral protocerebrum (SLP), the primary neuropil that the LHONs project to.

In the second example, we sought to define cell types not just by visually inspecting the morphology of the neurons, but also by taking into account the underlying graph structure of the circuit pathways. This is useful when a new dataset is released without explicit definitions of cell types and/or when there is a need for refining such definitions. Here, to automatically analyze neuron cell types in the VA1v glomerulus dataset (Horne et al., 2018), we applied the Adjacency Spectral Embedding algorithm (Sussman et al., 2012) of the NeuroGraph library (see Appendix 2 and Materials and methods, Use Case 2). The embedding is visualized using UMAP (McInnes et al., 2018) and depicted in Figure 4d, and it is validated by annotations from the original dataset. We note that the overlap between PNs and some LNs is due to the restricted volume of the traced tissue. For an additional adjustment of their cell-type assignment, the resulting clusters of neurons can be further visually inspected as shown in Figure 4e. Outliers that lie far away from their clusters may guide future inquiries into cell types that have not been previously described or provide new descriptions for existing cell types contingent on their connectivity. New neuron subtypes, for example, LNs that cluster with OSNs or neurons that cluster with LNs, can be further investigated. Finally, a circuit diagram can be automatically generated using the NeuroGraph Library, as shown in Figure 4f.
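The core of the Adjacency Spectral Embedding step can be sketched in a few lines of NumPy; the NeuroGraph library wraps a full implementation with automatic dimension selection, whereas here the embedding dimension d is fixed by hand.

```python
import numpy as np

def adjacency_spectral_embedding(A, d=2):
    """Embed each neuron as the d-dimensional vector U_d * sqrt(S_d) obtained
    from the truncated SVD of the adjacency matrix A (Sussman et al., 2012).
    Neurons with similar connectivity profiles land near each other."""
    A = np.asarray(A, dtype=float)
    U, S, _ = np.linalg.svd(A)
    return U[:, :d] * np.sqrt(S[:d])
```

The resulting low-dimensional points are what a tool such as UMAP then projects for visualization and what a clustering step groups into candidate cell types.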

Lastly, we demonstrate the process of automatic circuit diagram generation of explored brain circuits. Here, we explored the lateral horn subcircuit downstream of the V glomerulus projection neurons, as well as the neuropils that the lateral horn output neurons (LHONs) project to (Varela et al., 2019). The circuit can be easily specified and visualized by NeuroNLP queries (see Materials and methods, Use Case 2), and individual neurons can be further added/removed from the GUI. The resulting circuit is depicted in Figure 4h. We then inspected the innervation pattern of each neuron, either visually, or by querying its arborization data from the NeuroArch Database, and classified it either as a lateral horn local neuron (LHLN) or a LHON. The connectivity of neurons of the resulting circuit is shown in Figure 4g, where the rows and columns are grouped by neuron type. Using this collection of information, we invoked the NeuroGraph Library to create the circuit diagram shown in Figure 4i (see also Materials and methods, Use Case 2). The circuit diagram can then be used for computational studies as outlined in the previous examples.
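The condensation behind such automatically generated diagrams can be sketched as a grouping of the neuron-level synapse table by cell type; the NeuroGraph internals also handle layout and neuropil annotations, which are omitted here.

```python
def condense_by_type(synapses, cell_type):
    """Collapse a neuron-level synapse table into a cell-type-level circuit
    diagram by summing synapse counts between groups.

    synapses: dict mapping (pre, post) neuron names -> synapse count.
    cell_type: dict mapping neuron name -> cell-type label (e.g. 'LHON').
    """
    diagram = {}
    for (pre, post), n in synapses.items():
        key = (cell_type[pre], cell_type[post])
        diagram[key] = diagram.get(key, 0) + n
    return diagram
```

Each key of the returned table corresponds to one arrow of the generated diagram, weighted by the total number of synapses between the two groups.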

Use Case 3: interactive exploration of executable fruit fly brain circuit models

Beyond exploring the structure of fruit fly brain circuits, a primary objective supported by FlyBrainLab is the study of the function of executable circuits constructed from fly brain data. FlyBrainLab provides users with rapid access to executable circuits stored on the NeuroArch Database. During program execution, these circuits can also be directly accessed by the Neurokernel Execution Engine.

In Figure 5a, we depict the pathways of a cartridge of the Lamina neuropil (Rivera-Alba et al., 2011) visualized in the NeuroNLP window. The circuit diagram modeling the cartridge visualized in the NeuroGFX window is shown in Figure 5b. With proper labels assigned to blocks/lines representing the neurons and synapses, we made the circuit diagram interactive. For example, by clicking on the block representing a neuron, the neuron can be inactivated, an operation corresponding to silencing/ablating a neuron in the fly brain. Figure 5d depicts a modified cartridge circuit in which several neurons have been silenced. As a result, the visualized neural pathways in the NeuroNLP window automatically reflect these changes, as shown in Figure 5c. The circuit components can also be disabled/re-enabled by hiding/displaying visualized neurons in the NeuroNLP window.

Interactive exploration of executable circuit models.

(a) The pathways of a Lamina cartridge visualized in the NeuroNLP window. (b) A circuit diagram of the cartridge in (a) displayed in the NeuroGFX window. (c) The cartridge pathways modified interactively using the circuit diagram in (b), resulting in the circuit diagram in (d). (d) The circuit diagram modeling the chosen pathways in (c).

In the same interactive diagram, models of the circuit components and their parameters can be viewed/specified from a Model Library with all the available model implementations in the Neurokernel Execution Engine. In addition to these simple interactive operations further detailed in Materials and methods, Use Case 3, FlyBrainLab APIs support bulk operations such as updating models and parameters of an arbitrary number of circuit components (see also Appendix 4).
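A bulk operation of this kind amounts to building a table of model specifications and submitting it through the client in one call. The sketch below only builds the table; the model name, the parameter names, and their values are illustrative placeholders, not those of the Neurokernel Model Library or the exact FlyBrainLab API.

```python
# Placeholder defaults for a leaky integrate-and-fire model; parameter names
# and values are illustrative, not taken from the Neurokernel model library.
LIF_DEFAULTS = {"threshold": 1.0, "reset": 0.0, "capacitance": 0.07}

def bulk_assign(neuron_names, model="LeakyIAF", **overrides):
    """Build a {neuron name: model spec} table, patching the defaults with
    any per-call overrides, ready to be submitted as a single bulk update."""
    params = {**LIF_DEFAULTS, **overrides}
    return {name: {"class": model, "params": dict(params)}
            for name in neuron_names}
```

Keeping one spec dict per neuron (rather than a shared reference) lets individual components be re-tuned later without affecting the rest of the circuit.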

Use Case 4: analyzing, evaluating, and comparing circuit models of the fruit fly central complex

We first demonstrate the workflow supported by FlyBrainLab for analyzing, evaluating and comparing circuit models of the fruit fly Central Complex (CX) based on the FlyCircuit dataset (Chiang et al., 2011). The circuit connecting the ellipsoid body (EB) and the protocerebral bridge (PB) in the CX has been shown to exhibit ring attractor dynamics (Seelig and Jayaraman, 2015; Kim et al., 2017; Skaggs et al., 1995). Recently, a number of researchers investigated circuit mechanisms underlying these dynamics. Here, we developed a CXcircuits Library for analyzing, evaluating and comparing various CX circuit realizations. Specifically, we implemented three of the circuit models published in the literature, called here model A (Givon et al., 2017), model B (Kakaria and de Bivort, 2017), and model C (Su et al., 2017), and compared them in the same FlyBrainLab programming environment.

In Figure 6 (a1, b1, c1), the anatomy of the neuronal circuits considered in models A, B, and C is depicted, respectively. The corresponding interactive circuit diagram is shown in Figure 6 (a2, b2, c2). Here, model A provides the most complete interactive CX circuit, including the core subcircuits for characterizing the PB-EB interaction with the EB-LAL-PB, PB-EB-LAL, PB-EB-NO, PB local, and EB ring neurons (see Materials and methods, Use Case 4, and Givon et al., 2017 for commonly used synonyms). Models B and C exhibit different subsets of the core PB-EB interaction circuit in model A. While no ring neurons are modeled in model B, PB local neurons are omitted in model C. They, however, do not model other neurons in the CX, for example, those that innervate the Fan-shaped Body (FB).

Analysis, evaluation and comparison of three models of CX published in the literature.

(a1–a4) Model A (Givon et al., 2017), (b1–b4) Model B (Kakaria and de Bivort, 2017), (c1–c4) Model C (Su et al., 2017). (a1, b1, c1) Morphology of the neurons visualized in the NeuroNLP window (see Figure 2b). Displayed number of neurons: (a1) 366, (b1) 87, (c1) 54. (a2, b2, c2) Neuronal circuits in the NeuroNLP window depicted in the NeuroGFX window (see Figure 2b) as abstract interactive circuit diagrams. The naming of the ring neurons in (c2) follows Su et al., 2017. Number of neurons in the diagrams: (a2) 348, (b2) 60, (c2) 56. As the FlyCircuit dataset contains duplicates, some neurons in the diagrams may correspond to multiple neurons in the dataset and some do not have correspondences due to the lack of morphology data. (a3, b3, c3) When a single vertical bar is presented in the visual field (d1/d2), different sets of neurons/subregions (highlighted) in each of the models, respectively, receive either current injections or external spike inputs. (a4, b4, c4) The mean firing rates of the EB-LAL-PB neurons innervating each of the EB wedges of the three models (see Materials and methods, Use Case 4), in response to the stimulus shown in (d3). Insets show the rates at 10, 20, and 30 s, respectively, overlaid onto the EB ring. (d1) A schematic of the visual field surrounding the fly. (d2) The visual field flattened. (d3) Input stimulus consisting of a bar moving back and forth across the screen, and a second fixed bar at 60° and with lower brightness.

In Video 1, we demonstrate the interactive capabilities of the three models side-by-side, including the visualization of the morphology of CX neurons and the corresponding executable circuits, user interaction with the circuit diagram revealing connectivity patterns, and the execution of the circuit. In the video, the visual stimulus depicted in Figure 6 (d3) was presented to the three models (see Materials and methods, Use Case 4, for the details of generating the input stimulus for each model). The responses, measured as the mean firing rate of EB-LAL-PB neurons within contiguous EB wedges, are shown in Figure 6 (a4, b4, c4), respectively. Insets depict the responses at 10, 20, and 30 s. During the first second, a moving bar in its fixed initial position and a static bar are presented. The moving bar displays a higher brightness than the static bar. All three models exhibited a single-bump (slightly delayed) response tracking the position of the moving bar. The widths of the bumps were different, however. After 30 s, the moving bar disappeared and models A and B shifted to track the location of the static bar, whereas the bump in model C persisted in the same position where the moving bar disappeared. Furthermore, for models B and C but not for model A, the bumps persisted after the removal of the visual stimulus (after 33 s), as previously observed in vivo (Seelig and Jayaraman, 2015; Kim et al., 2017).

Video 1
Running three CX executable circuits in the FlyBrainLab.

(left) Model A (Figure 6a). (middle) Model B (Figure 6b). (right) Model C (Figure 6c).

By comparing these circuit models, we notice that, to achieve the ring attractor dynamics, it is critical to include global inhibitory neurons, for example, PB local neurons in models A and B, and ring neurons in models A and C. The ring neurons of model A, featuring a different receptive field, and the ring neurons of model C, receiving spike train input, play a similar functional role. However, to achieve the ring attractor dynamics characterized by a single-bump response to multiple bars and persistent bump activity after the removal of the vertical bar, model C required only three out of the five core neuron types (see Materials and methods, Use Case 4), whereas model B required all four of the neuron types it includes.
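The role of the ingredients identified above, local excitation balanced by global inhibition, can be illustrated with a minimal rate-based ring network. This is a generic textbook sketch with hand-picked constants, not a reimplementation of models A, B, or C: a bar-like input drives one wedge for the first half of the simulation and is then removed, after which the activity bump persists at that location.

```python
import numpy as np

def ring_bump(n=32, k0=8, tau=0.05, dt=0.005, steps=2000):
    """Rate-based ring attractor sketch: cosine-tuned local excitation (J1)
    plus uniform global inhibition (J0). A tuned 'bar' input drives wedge k0
    for the first half of the simulation and is then removed."""
    theta = np.arange(n) * 2 * np.pi / n
    J0, J1 = -5.0, 8.0                      # illustrative constants
    W = (J0 + J1 * np.cos(theta[:, None] - theta[None, :])) / n
    bar = 0.5 * (1 + np.cos(theta - theta[k0]))   # bar-like tuned input
    r = np.zeros(n)
    for step in range(steps):
        I = 0.2 + (bar if step < steps // 2 else 0.0)  # baseline + stimulus
        r += dt / tau * (-r + np.maximum(0.0, np.tanh(W @ r + I)))
    return r  # bump centered near wedge k0 persists after stimulus removal
```

With these constants the final rates still peak at wedge k0 even though the bar was removed halfway through the run, mirroring the persistent bump activity reported for models B and C.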

Use Case 5: analyzing, evaluating, and comparing adult antenna and antennal lobe circuit models based upon the FlyCircuit and Hemibrain datasets

In the second example, we demonstrate how the choice of dataset, the FlyCircuit (Chiang et al., 2011) or the Hemibrain (Scheffer et al., 2020) dataset, affects the modeling of the antenna and antennal lobe circuits (see also Materials and methods, Use Case 5).

We start by exploring and analyzing the morphology and connectome of the olfactory sensory neurons (OSNs), antennal lobe projection neurons (PNs), and local neurons (LNs) of the FlyCircuit (Chiang et al., 2011) and the Hemibrain (Scheffer et al., 2020) datasets (see Figure 7a). Compared with the antennal lobe data in the FlyCircuit dataset, the Hemibrain dataset reveals additional connectivity details between OSNs, PNs, and LNs that we took into account when modeling the antennal lobe circuit (see Materials and methods, Use Case 5). Following Lazar et al., 2020a, we first constructed the two-layer circuit based on the FlyCircuit dataset shown in Figure 7b (left) and then constructed a more extensive connectome/synaptome model of the adult antennal lobe based on the Hemibrain dataset as shown in Figure 7b (right).

Analysis, evaluation, and comparison between two models of the antenna and antennal lobe circuit of the adult fly based on the FlyCircuit (left) dataset (Chiang et al., 2011) and an exploratory model based on the Hemibrain (right) dataset (Scheffer et al., 2020).

(a) Morphology of olfactory sensory neurons, local neurons, and projection neurons in the antennal lobe for the two datasets. The axons of the projection neurons and their projections to the mushroom body and lateral horn are also visible. (b) Circuit diagrams depicting the antenna and antennal lobe circuit motifs derived from the two datasets. (c) Response of the antenna/antennal lobe circuit to a constant ammonium hydroxide step input applied between 1 s and 3 s of a 5 s simulation; (left) the interaction between the odorant and 23 olfactory receptors is captured as the vector of affinity values; (middle and right) a heatmap of the uniglomerular PN PSTH values (spikes/second) grouped by glomerulus for the two circuit models. (d) The PN response transients of the two circuit models for uniform noise input with a minimum of 0 ppm and a maximum of 100 ppm preprocessed with a 30 Hz low-pass filter (Kim et al., 2011) and delivered between 1 s and 3 s.

Execution of these two circuit models and comparison of their results show quantitatively different PN output activity in steady-state (Figure 7c) and during transients (Figure 7d). A prediction (Lazar and Yeh, 2019; Lazar et al., 2020a) made by the antenna and antennal lobe circuit shown in Figure 7b (left) using the FlyCircuit data has been that the PN activity, bundled according to the source glomerulus, is proportional to the vector characterizing the affinity of the odorant-receptor pairs (Figure 7c, left column).
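This proportionality prediction can be checked numerically by comparing the glomerulus-bundled PN rate vector with the odorant's affinity vector up to a scale factor, for example via cosine similarity. A minimal sketch with made-up numbers (not data from either model):

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine of the angle between two response vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical affinity vector of one odorant across five glomeruli, and a
# PN steady-state rate vector that is an exact scaled copy of it.
affinity = np.array([0.1, 0.8, 0.3, 0.05, 0.6])
pn_rates = 42.0 * affinity

# Proportional vectors have cosine similarity 1 regardless of the scale factor.
print(round(cosine_similarity(affinity, pn_rates), 6))  # → 1.0
```

A cosine similarity close to 1 across a panel of odorants would support the prediction; systematic deviations, as observed for the Hemibrain-based model, indicate additional processing.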

The transient and steady-state activity responses are further highlighted in Figure 7d for different amplitudes of the odorant stimulus waveforms. The initial results show that the circuit on the right emphasizes the onset and offset of the odorant waveforms.

The complex connectivity between OSNs, LNs, and PNs revealed by the Hemibrain dataset suggests that the adult antennal lobe circuit encodes additional odorant representation features (Scheffer et al., 2020).

Use Case 6: analyzing, evaluating, and comparing early olfactory circuit models of the larva and the adult fruit flies

In the third example, we investigate the difference in odorant encoding and processing in the Drosophila Early Olfactory System (EOS) at two different developmental stages, the adult and larva (see also Materials and methods, Use Case 6).

We start by exploring and analyzing the morphology and connectome for the Olfactory Sensory Neurons (OSNs), Antennal Lobe Projection Neurons (PNs) and Local Neurons (LNs) of the adult Hemibrain (Scheffer et al., 2020) dataset and the LarvaEM (Berck et al., 2016) dataset (see Figure 8a).

Evaluation and Comparison of two Drosophila Early Olfactory System (EOS) models describing adult (left, developed based on Hemibrain dataset) and larval (right, developed based on LarvaEM dataset) circuits.

(a) Morphology of Olfactory Sensory Neurons (OSNs) in the Antenna (ANT), Local Neurons (LNs) in the Antennal Lobe (AL) and Projection Neurons in the AL. (b) Circuit diagrams depicting the Antenna and Antennal Lobe circuit motifs. (c) (left) Interaction between 13 odorants and 37 odorant receptors (ORs) characterized by affinity values. The ORs expressed only in the adult fruit flies are grouped in the top panel; the ones that are expressed in both the adult and the larva are grouped in the middle panel; and those expressed only in the larva are shown in the bottom panel. Steady-state outputs of the EOS models to a step concentration waveform of 100 ppm are used to characterize combinatorial codes of odorant identities at the OSN level (middle) and the PN level (right).

Detailed connectivity data informed the construction of the models for both the adult and larva EOS, which we developed as parameterized versions of models in the previous literature (Lazar et al., 2020a). In particular, the larval model includes a smaller number of OSNs, PNs, and LNs in the Antenna and Antennal Lobe circuits, as shown in Figure 8b (right).

The adult and larval EOS models were simultaneously evaluated on a collection of mono-molecular odorants whose binding affinities to odorant receptors have been estimated from physiological recordings (see also Materials and methods, Use Case 6). In Figure 8c (left), the affinity values are shown for the odorant receptors that appear only in the adult fruit fly (top panel), in both the adult and the larva (middle panel), and only in the larva (bottom panel). The steady-state responses of the Antenna and Antennal Lobe circuits for both models are computed and shown in Figure 8c (middle and right, respectively). Visualized alongside the corresponding affinity vectors, they reveal a stark contrast in odorant representation at all layers of the circuit between adult and larva, raising the question of how downstream circuits can process differently represented odorant identities and yet instruct similar olfactory behavior across development. Settling such questions requires additional physiological recordings, which may also improve the accuracy of the current FlyBrainLab EOS circuit models.

Discussion

Historically, a large number of visualization and computational tools have been developed, designed primarily for either neurobiological studies (see Figure 1 (left)) or computational studies (see Figure 1 (right)). These are briefly discussed below.

The computational neuroscience community has invested a significant amount of effort in developing tools for analyzing and evaluating model neural circuits. A number of simulation engines have been developed, including general simulators such as NEURON (Hines and Carnevale, 1997), NEST (Gewaltig and Diesmann, 2007), Brian (Stimberg et al., 2019), Nengo (Bekolay et al., 2014), Neurokernel (Givon and Lazar, 2016), and DynaSim (Sherfey et al., 2018), as well as ones that specialize in multi-scale simulation, for example, MOOSE (Ray and Bhalla, 2008), in compartmental models, for example, ARBOR (Akar et al., 2019), and in fMRI-scale simulation, for example, The Virtual Brain (Sanz Leon et al., 2013; Melozzi et al., 2017). Other tools improve the accessibility of these simulators by (i) facilitating the creation of large-scale neural networks, for example, BMTK (Dai et al., 2020a) and NetPyNE (Dura-Bernal et al., 2019), and by (ii) providing a common interface, simplifying the simulation workflow and streamlining parallelization of simulations, for example, PyNN (Davison et al., 2008), Arachne (Aleksin et al., 2017), and NeuroManager (Stockton and Santamaria, 2015). To facilitate worldwide access to and exchange of neurobiological data, a number of model specification standards have been developed in parallel, including MorphML (Crook et al., 2007), NeuroML (Gleeson et al., 2010), SpineML (Tomkins et al., 2016), and SONATA (Dai et al., 2020b).

Even with the help of these computational tools, it still takes a substantial amount of manual effort to build executable circuits from real data provided, for example, by model databases such as ModelDB/NeuronDB (Hines et al., 2004) and NeuroArch (Givon et al., 2015). Moreover, with the ever expanding size of the fruit fly brain datasets, it has become more difficult to meet the demand of creating executable circuits that can be evaluated with different datasets. In addition, with very few exceptions, comparisons of circuit models, a standard process in the computer science community, are rarely available in the computational neuroscience literature.

Substantial efforts by the systems neuroscience community went into developing tools for visualizing the anatomy of the brain. A number of tools have been developed to provide interactive, web-based interfaces for exploring, visualizing and analyzing fruit fly brain and ventral nerve cord datasets, for both the adult (Chiang et al., 2011; Scheffer et al., 2020) and the larva (Ohyama et al., 2015). These include the FlyCircuit (Chiang et al., 2011), the Fruit Fly Brain Observatory (FFBO/NeuroNLP) (Ukani et al., 2019), Virtual Fly Brain (Milyaev et al., 2012), neuPrintExplorer (Clements et al., 2020), FlyWire (Dorkenwald et al., 2020), and CATMAID (Saalfeld et al., 2009). Similar tools have been developed for other model organisms, such as the Allen Mouse Brain Connectivity Atlas (Oh et al., 2014), the WormAtlas for C. elegans (https://www.wormatlas.org) and the Z Brain for zebrafish (Randlett et al., 2015). A number of projects, for example (Bates et al., 2020), offer a more specialized capability for visualizing and analyzing neuroanatomy data.

While these tools have significantly improved the access to and exploration of brain data, a number of recent efforts have started to bridge the gap between neurobiological data and computational modeling, including the Geppetto (Cantarelli et al., 2018), OpenWorm (Szigeti et al., 2014) and Open Source Brain (Gleeson et al., 2019) initiatives and the Brain Simulation Platform of the Human Brain Project (Einevoll et al., 2019). However, without information linking circuit activity/computation to the structure of the underlying neuronal circuits, understanding the function of brain circuits remains elusive. The lack of a systematic method for automating the creation and exploration of executable circuits at the brain or system scale hinders the application of these tools when composing more complex circuits. Furthermore, these tools fall short of offering the capability of generating static circuit diagrams, let alone interactive ones. The experience of VLSI design, analysis, and evaluation of computer circuits might be instructive here. An electronic circuit engineer reads a circuit diagram of a chip, rather than the 3D structure of the tape-out, to understand its function, although the latter ultimately realizes it. Similarly, visualization of a biological circuit alone, while powerful and intuitive for building a neural circuit, provides little insight into the function of the circuit. While simulations can be done without a circuit diagram, understanding how an executable circuit leads to its function remains elusive.

The tools discussed above all fall short of offering an integrated infrastructure that can effectively leverage the ever expanding neuroanatomy, genetic, and neurophysiology data for creating and exploring executable fly brain circuits. Creating circuit simulations from visualized data remains a major challenge and requires extraordinary effort in practice, as amply demonstrated by the Allen Brain Observatory (de Vries et al., 2020). The need to accelerate the pace of discovery of the functional logic of the brain of model organisms has taken center stage in brain research.

FlyBrainLab is uniquely positioned to accelerate the discovery of the functional logic of the Drosophila brain. Its interactive architecture seamlessly integrates computational models with neuroanatomical, neurogenetic, and neurophysiological data, changing the organization of fruit fly brain data from a group of independently created datasets, arrays, and tables into a well-structured data and executable circuit repository, with a simple API for accessing data in different datasets. Current data integration focuses extensively on connectomics/synaptomics datasets that, as demonstrated, strongly inform the construction of executable circuit models. We will continue to expand the capabilities of the NeuroArch database with genetic Gal4 lines (https://gene.neuronlp.fruitflybrain.org) and neurophysiology recordings, including our own (http://antenna.neuronlp.fruitflybrain.org/). How to construct executable models of brain circuits using genetic and neurophysiology datasets is not the object of this publication and will be discussed elsewhere. Pointers to our initial work are given below.

As detailed here, the FlyBrainLab UI supports a highly intuitive and automated workflow that streamlines the 3D exploration and visualization of fly brain circuits, and the interactive exploration of the functional logic of executable circuits created directly from the analyzed fly brain data. In conjunction with the capability of visually constructing circuits, speeding up the process of creating interactive executable circuit diagrams can substantially reduce the exploratory development cycle.

The FlyBrainLab Utility and Circuit Libraries accelerate the creation of models of executable circuits. The Utility Libraries (detailed in Appendix 2) help untangle the graph structure of neural circuits from raw connectome and synaptome data. The Circuit Libraries (detailed in Appendix 3) facilitate the exploration of neural circuits of the neuropils of the central complex and the development and implementation of models of the adult and larva fruit fly early olfactory system.

Importantly, to transcend the limitations of the connectome, FlyBrainLab provides Circuit Libraries for molecular transduction in sensory coding (detailed in Appendix 3), including models of sensory transduction and neuron activity data (Lazar et al., 2015a; Lazar et al., 2015b; Lazar and Yeh, 2020). These libraries serve as entry points for the discovery of circuit function in the sensory systems of the fruit fly (Lazar and Yeh, 2019; Lazar et al., 2020a). They also enable the biological validation of developed executable circuits within the same platform.

The modular software architecture underlying FlyBrainLab provides substantial flexibility and scalability for the study of the larva and adult fruit fly brain. As more data becomes available, we envision that the entire central nervous system of the fruit fly can be readily explored with FlyBrainLab. Furthermore, the core of the software and the workflow enabled by the FlyBrainLab for accelerating discovery of Drosophila brain functions can be adapted in the near term to other model organisms including the zebrafish and bee.

Materials and methods

The FlyBrainLab interactive computing platform tightly integrates tools enabling the morphological visualization and exploration of large connectomics/synaptomics datasets, interactive circuit construction and visualization, and multi-GPU execution of neural circuit models for in silico experimentation. The tight integration is achieved with a comprehensive open software architecture and libraries to aid data analysis, creation of executable circuits and exploration of their functional logic.

Architecture of FlyBrainLab

FlyBrainLab exhibits a highly extensible, modularized architecture consisting of a number of interconnected server-side and user-side components (see Appendix 1—figure 1) including the NeuroArch Database, the Neurokernel Execution Engine and the NeuroMinerva front-end. The architecture of FlyBrainLab and the associated components are described in detail in Appendix 1.

FlyBrainLab Utilities Libraries

FlyBrainLab offers a number of utility libraries to untangle the graph structure of neural circuits from raw connectome and synaptome data. These libraries provide a large number of tools including high level connectivity queries and analysis, algorithms for discovery of connectivity patterns, circuit visualization in 2D or 3D and morphometric measurements of neurons. These utility libraries are described in detail in Appendix 2.

FlyBrainLab Circuit Libraries

FlyBrainLab provides a number of libraries for analysis, evaluation and comparison of fruit fly brain circuits. The initial release of FlyBrainLab offers libraries for exploring neuronal circuits of the central complex, early olfactory system, and implementations of olfactory and visual transduction models. These circuit libraries are described in detail in Appendix 3.

Loading publicly available datasets into NeuroArch Database

All datasets are loaded into the NeuroArch database (Givon et al., 2015; Givon et al., 2014) using the NeuroArch API (https://github.com/fruitflybrain/neuroarch).

For the FlyCircuit dataset (Chiang et al., 2011) version 1.2, all 22,828 female Drosophila neurons were loaded, including their morphology, putative neurotransmitter type, and other available metadata. The original names of the neurons were used; these names also serve as the ‘referenceId’ pointing to the record in the original dataset. Connectivity between neurons was inferred according to Huang et al., 2018 and loaded as a different, inferred class of synapses, totaling 4,538,280 connections between pairs of neurons. The metadata was provided by the authors (Huang et al., 2018).

For the Hemibrain dataset (Scheffer et al., 2020), release 1.0.1, attributes of the neurons, synapses and connections were obtained from the Neuprint database dump available at https://storage.cloud.google.com/hemibrain-release/neuprint/hemibrain_v1.0.1_neo4j_inputs.zip. The neuropil boundary mesh and neuron morphology were obtained by invoking the neuprint-python API (Clements et al., 2020) of the database server publicly hosted by the original dataset provider. The former was post-processed to simplify the mesh object in MeshLab (https://www.meshlab.net) using quadric edge collapse decimation with a percentage of 0.05. All coordinates were scaled by 0.008 to a [µm] unit. A total of 24,770 neurons were loaded, comprising those designated in the Neuprint database as ‘Traced’ or ‘Roughly Traced’, as well as neurons that were assigned a name or a cell type. Cell type and neuron name follow the ‘type’ and ‘instance’ attributes, respectively, in the original dataset. To create a unique name for each neuron, neurons with the same instance name were padded with a sequential number. The BodyIDs of neurons in the original dataset are used as the ‘referenceId’. A total of 3,604,708 connections between pairs of neurons were loaded, including the positions of 14,318,675 synapses.

At the time of publication, the Hemibrain dataset release 1.2 (https://storage.cloud.google.com/hemibrain-release/neuprint/hemibrain_v1.2_neo4j_inputs.zip) was also loaded into the NeuroArch Database. It included a total of 25,842 neurons, 3,817,700 connections between pairs of these neurons and the positions of 15,337,617 synapses.

For the Larva L1EM dataset (Ohyama et al., 2015), a total of 1,051 neurons characterized by their morphology and metadata were loaded from the publicly served database server at https://l1em.catmaid.virtualflybrain.org. The IDs of neurons in the original dataset were used as ‘referenceId’. A total of 30,350 connections between pairs of neurons were loaded, including the position of 121,112 synapses. All coordinates were scaled by 0.001 to a [μm] unit.

For the Medulla 7 Column dataset (Takemura et al., 2015), the attributes of the neurons, synapses and connections were obtained from the Neuprint database server export available at https://storage.cloud.google.com/hemibrain-release/neuprint/fib25_neo4j_inputs.zip. Neuron morphology was obtained from https://github.com/janelia-flyem/ConnectomeHackathon2015 commit 81e94a9. Neurons without a morphology were omitted during loading. The rest of the procedure is the same as for loading the Hemibrain dataset. A total of 2365 neurons, 42,279 connections between pairs of neurons, and the positions of 130,203 synapses were loaded. Neurotransmitter data was obtained from the Gene Expression Omnibus accession GSE116969 of the transcriptome study published in Davis et al., 2020.

Extra annotations were avoided as much as possible when loading these datasets into the NeuroArch database for public download; where added, they serve only to comply with the NeuroArch data model. The complete loading scripts are available at https://github.com/FlyBrainLab/datasets.

Use Case 1: building fly brain circuits with English queries

The circuit in Figure 3 (a1) was built using the Medulla 7 Column dataset. The following English queries were used to construct the circuit: (1) ‘show T4a in column home’, (2) ‘color lime’, (3) ‘add presynaptic neurons’, (4) ‘color gray’, (5) ‘add presynaptic $Dm$ neurons with more than five synapses’, (6) ‘color cyan’, (7) ‘add presynaptic $Pm$ neurons with more than five synapses’, (8) ‘color yellow’, (9) ‘pin T4a in column home’, (10) ‘pin $Dm$’, (11) ‘pin $Pm$’.

The circuit in Figure 3 (b1) was built using the Hemibrain dataset release 1.2. The following English queries were used to construct the circuit: (1) ‘show MBON presynaptic to neurons that has outputs in FB layer 3 with at least 10 synapses’, (2) ‘color mbon yellow’, (3) ‘add postsynaptic neurons with at least 10 synapses that has output in FB layer 3’, (4) ‘color forest green’, (5) ‘add mbon postsynaptic to neurons that have input in FB layer 3 with at least 10 synapses’, (6) ‘color red’.

The circuit in Figure 3 (c1) was built using the Larva L1EM dataset. We first query the MBON that innervates the g compartment with ‘show $MBON-g$ in right mb’. The Information Panel in the FlyBrainLab UI provides a list of presynaptic partners and a list of postsynaptic partners of the selected neuron. After filtering the list by name and by the number of synapses, each neuron and the synapses to/from it can be added to the NeuroNLP window for visualization. Finally, the collection of all filtered results can be added to the NeuroNLP window with a single button click. The circuit in Figure 3 (c1) was constructed by leveraging this capability.

The circuit in Figure 3 (d1) was built using the Hemibrain dataset release 1.2. First, the LPTCs and GLNs in the right hemisphere were added with the NLP queries ‘show LPTC’ and ‘add /rGLN(.*)R(.*)/r’. Second, to obtain the pathway between the two neuron types, the following query was invoked:

# `client` is a FlyBrainLab Client instance connected to the backend
# query for LPTC
res1 = client.executeNLPquery("show LPTC")
# color the neurons in the previous query
_ = client.executeNLPquery('color orange')
# query for GLN on the right hemisphere using regular expression 
res2 = client.executeNLPquery("add /rGLN(.*)R(.*)/r")
# color the neurons in the previous query
_ = client.executeNLPquery("color cyan") 
# get the unique names of the GLNs
gln = [v['uname'] for v in res2.neurons.values()]
# query using NeuroArch JSON format 
task = {"query": [
          {"action": {"method": {"path_to_neurons": {
              "pass_through": gln,
              "max_hops": 2,
              "synapse_threshold": 5
          }}},
          "object": {"state": 1}}],
        "format": "morphology",
        "verb": "add"
}
res3 = client.executeNAquery(task)
_ = client.executeNLPquery("color red")

After building up the circuit in the NeuroNLP window, the connectivity matrices of the four circuits were retrieved using the ‘get_neuron_adjacency_matrix’ method.
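The connectivity matrix returned by such a call is, in essence, a neuron-by-neuron table of synapse counts. A minimal, self-contained sketch of assembling one from (presynaptic, postsynaptic, synapse count) tuples (neuron names and counts below are made up):

```python
import numpy as np

# Hypothetical connection list: (presynaptic, postsynaptic, number of synapses).
connections = [
    ("T4a-home", "Mi1-home", 12),
    ("Mi1-home", "T4a-home", 7),
    ("Dm8-1",    "T4a-home", 6),
]

# A stable ordering of the unique neuron names defines the row/column indices.
neurons = sorted({n for pre, post, _ in connections for n in (pre, post)})
index = {name: i for i, name in enumerate(neurons)}

# Rows are presynaptic neurons, columns postsynaptic; entries are synapse counts.
A = np.zeros((len(neurons), len(neurons)), dtype=int)
for pre, post, count in connections:
    A[index[pre], index[post]] += count

print(A[index["T4a-home"], index["Mi1-home"]])  # → 12
```

The same matrix, thresholded by synapse count, is the starting point for the graph analyses in the following use cases.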

Use Case 2: exploring the structure and function of yet to be discovered brain circuits

To investigate the overall brain-level structure from Hemibrain neurons (Figure 4a–c), the NeuroArch Database was queried for all neurons in the Hemibrain dataset and connectivity information (in the form of a directed graph) was extracted using the FlyBrainLab Client API (see Appendix 1.2). The Louvain algorithm (Blondel et al., 2008) of the NeuroGraph Library (see Appendix 2) was used to detect the community structure of the graph. Apart from the connectivity of the neurons, any annotation of the neurons was excluded from the analysis. A random subset of neurons from each densely connected group was visualized and colored in the NeuroNLP window. Group names were assigned by visually inspecting the results displayed in the NeuroNLP window and the known neuropil structure of the fly brain. The circuit diagram was created by hand according to the groups and inter-group connections.
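The community detection step can be illustrated on a toy graph using networkx's Louvain implementation (the actual analysis used the NeuroGraph Library on the full, directed Hemibrain connectivity graph; the undirected toy graph below is purely illustrative):

```python
import networkx as nx

# Two densely connected four-node groups joined by a single bridge edge,
# standing in for densely connected neuron groups in the connectivity graph.
G = nx.Graph()
G.add_edges_from((i, j) for i in range(4) for j in range(i + 1, 4))     # group 1
G.add_edges_from((i, j) for i in range(4, 8) for j in range(i + 1, 8))  # group 2
G.add_edge(3, 4)  # single inter-group connection

# Louvain greedily maximizes modularity; seed fixed for reproducibility.
communities = nx.community.louvain_communities(G, seed=0)
print(sorted(sorted(c) for c in communities))  # → [[0, 1, 2, 3], [4, 5, 6, 7]]
```

On the real graph, each recovered community corresponds to a densely interconnected group of neurons that can then be visualized and named.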

To define cell types in the VA1v glomerulus connectome dataset (Figure 4d–f), the NeuroArch Database was queried for all neurons in the VA1v dataset and connectivity information (in the form of a directed graph) was extracted using the FlyBrainLab Client API (see Appendix 1.2). The Adjacency Spectral Embedding algorithm (Sussman et al., 2012) of the NeuroGraph Library (see Appendix 2) was applied to calculate embeddings via the GMMASE approach (Priebe et al., 2017). Annotations of the identity of the neurons, if any, were not used in this step of the analysis. To check the quality of the embedding results, human-annotated data from the original dataset was used to color the neurons according to their cell types. Neurons from each cell type were visualized and colored by NeuroNLP commands. The circuit diagram was generated using the NeuroGraph Library. Connections between neurons were established only if more than 10 synapses from a presynaptic neuron to a postsynaptic neuron could be identified. Coloring of the cell types matches the NeuroNLP commands.
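At its core, Adjacency Spectral Embedding represents each node by a row of U·√S obtained from a truncated SVD of the adjacency matrix; GMMASE then fits a Gaussian mixture to these rows to define clusters. A self-contained numpy sketch on a toy two-block adjacency matrix (not the NeuroGraph implementation):

```python
import numpy as np

def adjacency_spectral_embedding(A, d):
    """Embed each node as a row of U_d * sqrt(S_d), from the top-d SVD of A."""
    U, S, _ = np.linalg.svd(A)
    return U[:, :d] * np.sqrt(S[:d])

# Toy two-block adjacency matrix: nodes 0-3 fully interconnect, as do nodes 4-7.
A = np.zeros((8, 8))
A[:4, :4] = 1.0
A[4:, 4:] = 1.0
np.fill_diagonal(A, 0.0)

X = adjacency_spectral_embedding(A, d=2)

# Nodes in the same block land on (nearly) the same point of the embedding,
# while nodes in different blocks land far apart, so a Gaussian mixture
# fitted to the rows of X recovers the two blocks.
print(np.linalg.norm(X[0] - X[1]) < np.linalg.norm(X[0] - X[4]))  # → True
```

In the use case, the mixture components recovered this way were compared against the human-annotated cell types.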

To investigate the downstream neurons of V glomerulus projection neurons of the AL (Figure 4g–i), the latter neurons and their postsynaptic partners with more than 10 synapses were visualized with NeuroNLP queries. Arborization data of each neuron was queried to determine whether it is a local neuron or an output neuron of the lateral horn. NeuroGraph Library was used to generate the circuit diagram. The circuit diagram generation is based on the GraphViz library (Ellson et al., 2001), and additional information such as group names were used for arranging the diagram.

Use Case 3: interactive exploration of executable fruit fly brain circuit models

Connectome data published by Rivera-Alba et al., 2011 was uploaded into the NeuroArch Database, including the six photoreceptors, eight neurons, and six neurites of multiple amacrine cells that innervate the cartridge. For simplicity, each of the amacrine cell neurites was considered as a separate neuron. The traced neuron data were obtained from authors of Rivera-Alba et al., 2011 and subsequently converted into neuron skeletons in SWC format. The connectivity between these neurons was provided in Rivera-Alba et al., 2011 as supplementary information.
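SWC is a plain-text skeleton format in which each non-comment line lists one sample as `index type x y z radius parent` (a parent of −1 marks the root). A minimal parser sketch (the toy skeleton below is made up):

```python
# Minimal SWC skeleton parser; each row: index, type, x, y, z, radius, parent.
SWC_EXAMPLE = """\
# toy three-sample skeleton
1 1 0.0 0.0 0.0 1.0 -1
2 3 1.0 0.0 0.0 0.5 1
3 3 2.0 0.5 0.0 0.4 2
"""

def parse_swc(text):
    """Return a dict mapping each sample index to its fields."""
    samples = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):  # skip blanks and comments
            continue
        idx, typ, x, y, z, r, parent = line.split()
        samples[int(idx)] = {
            "type": int(typ),
            "xyz": (float(x), float(y), float(z)),
            "radius": float(r),
            "parent": int(parent),
        }
    return samples

skeleton = parse_swc(SWC_EXAMPLE)
print(len(skeleton))  # → 3
```

The parent pointers encode the tree structure of the skeleton, which is what the visualization front-end renders.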

Loading data into NeuroArch Database was achieved with the FlyBrainLab 'query' module. The 'query' module provides a mirror of the functionality available in the high-level NeuroArch API. The pathway was then explored in the NeuroNLP window and the connectivity matrix extracted by FlyBrainLab Client API call, as described above.

The circuit diagram in Figure 5 was created manually using Inkscape. All blocks representing neurons were given the neuron’s unique name in the database as a label value and assigned the ‘.neuron’ class; similarly, all synapses were assigned the ‘.synapse’ class. The diagram was made interactive by loading a JavaScript file available in the NeuGFX library. The runtime interaction is controlled by the circuit module of the FlyBrainLab Client API.

Appendix 4 provides a walk-through of the process of creating and operating an interactive circuit diagram of a Lamina cartridge circuit, starting from raw connectomic and synaptomic data (Rivera-Alba et al., 2011). Some of the core FlyBrainLab capabilities (see also Appendix 1.2) are also highlighted. The walk-through is accompanied by a Jupyter notebook available at https://github.com/FlyBrainLab/Tutorials/tree/master/tutorials/cartridge/Cartridge.ipynb.

In what follows, the usage of FlyBrainLab in analyzing, evaluating, and comparing more complex circuit models is demonstrated. For brevity, Jupyter notebooks are only provided in the GitHub repositories listed at https://github.com/FlyBrainLab/FlyBrainLab/wiki/FlyBrainLab-Resources.

Use Case 4: analyzing, evaluating, and comparing circuit models of the fruit fly central complex

Model A (Givon et al., 2017; Figure 6a), Model B (Kakaria and de Bivort, 2017; Figure 6b) and Model C (Su et al., 2017; Figure 6c) were implemented using the CXcircuits Library (see also Appendix 3). A wild-type fruit fly CX circuit diagram based on model A was created in the SVG format and made interactive in the NeuroGFX window. The neurons modeled in the circuit diagram were obtained by querying all the neurons of the CX neuropils in the FlyCircuit dataset. The innervation pattern of each neuron was obtained from Lin et al., 2013 and visually examined in the NeuroNLP window. Based on the assumptions made for each model, a standard name was assigned to the model of the neuron according to the naming scheme adopted in the CXcircuits Library. The neurons with missing morphologies in the FlyCircuit dataset were augmented with the data available in the literature (Wolff et al., 2015; Lin et al., 2013).

This all-encompassing circuit diagram was then adapted to the other models. Since the assumptions about the subregions from which neurons receive inputs and to which they send outputs differ in each circuit model, slightly different names may be assigned to neurons in different circuit models. The complete list of modeled neurons of the three models is provided in Supplementary file 1 ‘CX_models.xlsx’, along with their corresponding neurons in the FlyCircuit dataset. The CXcircuits Library also uses this list to enable synchronization of the neurons visualized in the NeuroNLP window with the neurons represented in the NeuroGFX window.

All three models include the same core subcircuits for modeling the Protocerebral Bridge - Ellipsoid Body (PB-EB) interaction. The core subcircuits include three cell types, namely, the PB-EB-LAL, PB-EB-NO, and EB-LAL-PB neurons (NO - Noduli, LAL - Lateral Accessory Lobe, see also Givon et al., 2017 for a list of synonyms of each neuron class). These cells innervate three neuropils, either PB, EB, and LAL or PB, EB, and NO. Note that only synapses within PB and EB are considered. For model A, this is achieved by removing all neurons that do not belong to the core PB-EB circuit. This can be directly performed on the circuit diagram in the NeuroGFX window or by using the CXcircuits API. Model B includes an additional cell type, the PB local neurons that introduce global inhibition to the PB-EB circuit. Model C does not include PB local neurons, but models three types of ring neurons that innervate the EB. Both PB local neurons and ring neurons are present in model A. However, except for their receptive fields, all ring neurons in model A are the same (see below). Figure 9 depicts the correspondence established between the morphology of example neurons and their respective representation in the overall circuit diagram.

The correspondence between the morphology and the circuit diagram representation of 5 classes of neurons that determine the PB-EB interaction.

(a1, a2) EB-LAL-PB neuron and its wiring in the circuit diagram. (b1, b2) PB-EB-LAL neuron and its wiring in the circuit diagram. (c1, c2) PB-EB-NO neuron and its wiring in the circuit diagram. (d1, d2) PB local neuron and its wiring in the circuit diagram. (e1, e2) Ring neuron and its wiring in the circuit diagram.

In Model C, the subcircuit consisting of the PB-EB-LAL and EB-LAL-PB neurons was claimed to give rise to the persistent bump activity while the interconnect between PB-EB-NO and EB-LAL-PB allowed the bump to shift in darkness. To better compare with the other two models that did not model the shift in darkness, PB-EB-NO neurons were interactively disabled from the diagram.

For a single vertical bar presented to the fly at the position shown in Figure 6 (d1) (see also the flattened visual input in Figure 6 (d2)), the PB glomeruli or the EB wedges innervated by neurons of each of the three circuit models that receive injected current or external spike inputs are, respectively, highlighted in Figure 6 (a3, b3, c3). The CXcircuit Library generates a set of visual stimuli and computes the spike train and/or the injected current inputs to each of the three models.
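As an illustration of this input pre-processing, filtering a full-elevation vertical bar through rectangular receptive fields that tile the azimuth (as assumed for model A) reduces to integrating the stimulus over each 20° azimuth window; a numpy sketch with illustrative parameters:

```python
import numpy as np

# 1° azimuth grid over [0°, 360°); a 20°-wide bright bar spanning 85°-105°.
azimuth = np.arange(360.0)
bar = ((azimuth >= 85.0) & (azimuth < 105.0)).astype(float)

# 18 rectangular receptive fields, each covering 20° of azimuth, tile 360°;
# the injected current for a glomerulus is the stimulus integrated over its field.
currents = np.array([
    bar[(azimuth >= 20.0 * k) & (azimuth < 20.0 * (k + 1))].sum()
    for k in range(18)
])

# The bar straddles the receptive fields covering [80°, 100°) and [100°, 120°),
# so exactly two neighboring glomeruli receive injected current.
print(np.nonzero(currents)[0])  # → [4 5]
```

The same scheme, with a Gaussian instead of a rectangular kernel, applies to the BU microglomeruli of model A.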

For model A (Givon et al., 2017), each PB glomerulus is endowed with a rectangular receptive field that covers 20° in azimuth and the entire elevation. Together, the receptive fields of all PB glomeruli tile the 360° azimuth. All PB neurons with dendrites in a glomerulus, including the PB-EB-LAL and PB-EB-NO neurons, receive the visual stimulus filtered by the receptive field as an injected current. Additionally, each Bulb (BU) microglomerulus is endowed with a Gaussian receptive field with a standard deviation of 9° in both azimuth and elevation. The ring neuron innervating a microglomerulus receives the filtered visual stimulus as an injected current (see also the arrows in Figure 6 (a3)). Neuron dynamics follow the Leaky Integrate-and-Fire (LIF) neuron model

(1) $C_i \frac{dV_i}{dt} = -\frac{V_i - V_0^i}{R_i} + I_i$,

where $V_i$ is the membrane voltage of the $i$th neuron, $C_i$ is the membrane capacitance, $V_0^i$ is the resting potential, $R_i$ is the resistance, and $I_i$ is the synaptic current. Upon reaching the threshold voltage $V_{th}^i$, each neuron's membrane voltage is reset to $V_r^i$. Synapses are modeled as $\alpha$-synapses with dynamics given by the differential equations

(2) $g_i(t) = \bar{g}_i s_i(t)$, $\quad \frac{ds_i}{dt}(t) = h_i(t)\,\mathbb{1}_{[t \ge 0]}(t)$, $\quad \frac{dh_i}{dt}(t) = -(a_r^i + a_d^i)\,h_i(t) - a_r^i a_d^i s_i(t) + a_r^i a_d^i \sum_k \delta(t - t_k^i)$,

where $\bar{g}_i$ is a scaling factor, $a_r^i$ and $a_d^i$ are, respectively, the rise and decay rates of the synapse, $\mathbb{1}_{[t \ge 0]}(t)$ is the Heaviside step function, and $\delta(t)$ is the Dirac delta function; $\delta(t - t_k^i)$ indicates an input spike to the $i$th synapse at time $t_k^i$.
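
The dynamics of Equations (1) and (2) can be sketched with a forward Euler integration of a LIF neuron driven by an $\alpha$-synapse receiving a single presynaptic spike. All parameter values are illustrative, and the conductance-based synaptic current $I = g\,(E_{syn} - V)$ is an assumption made only for this sketch.

```python
import numpy as np

dt = 1e-5
n_steps = int(0.1 / dt)                       # 100 ms of simulated time

C, R, V0, Vth, Vr = 0.02, 1.0, 0.0, 1.0, 0.0  # illustrative LIF parameters
g_bar, a_r, a_d = 0.05, 400.0, 200.0          # illustrative synapse parameters
E_syn = 5.0                                   # assumed reversal potential
spike_idx = round(0.005 / dt)                 # presynaptic spike at 5 ms

V, s, h = V0, 0.0, 0.0
spikes_out = []
for k in range(n_steps):
    # a presynaptic Dirac delta enters dh/dt as an impulse of weight a_r*a_d
    impulse = a_r * a_d / dt if k == spike_idx else 0.0
    I = g_bar * s * (E_syn - V)               # assumed synaptic current
    V += dt / C * (-(V - V0) / R + I)         # Equation (1), forward Euler
    s_new = s + dt * h                        # Equation (2)
    h += dt * (-(a_r + a_d) * h - a_r * a_d * s + impulse)
    s = s_new
    if V >= Vth:                              # threshold: spike and reset
        spikes_out.append(k * dt)
        V = Vr
# the single input spike elicits a transient burst of output spikes
```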

For Model B (Kakaria and de Bivort, 2017), the receptive field of each of the 16 EB wedges covers 22.5° in azimuth. All EB-LAL-PB neurons that innervate a wedge receive a spike train input whose rate is proportional to the filtered visual stimulus (see also the arrow in Figure 6 (b3)). The maximum input spike rate is 120 Hz, attained when the visual stimulus is a bar of width 20° at maximum brightness 1. A 5 Hz background firing rate is added at all times, even in darkness. Neurons are modeled as LIF neurons with a refractory period of 2.2 ms, as suggested in Kakaria and de Bivort, 2017. For synapses, instead of the postsynaptic current (PSC)-based model described in Kakaria and de Bivort, 2017, the $\alpha$-synapse described above was used, with parameters chosen such that the time-to-peak and peak value approximately matched those of the PSC-based synapse.
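
The mapping from the filtered stimulus to an input spike train can be sketched as follows; the linear rate mapping between the 5 Hz background and the 120 Hz maximum, and the Bernoulli approximation of a Poisson process, are assumptions made for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def input_spike_train(filtered, dt=1e-4, duration=1.0):
    """Map a filtered stimulus value in [0, 1] to a binary spike train."""
    rate = 5.0 + (120.0 - 5.0) * np.clip(filtered, 0.0, 1.0)   # Hz
    # Bernoulli approximation of a Poisson process with the given rate
    return rng.random(int(duration / dt)) < rate * dt

dark = input_spike_train(0.0)   # ~5 Hz background even in darkness
lit = input_spike_train(1.0)    # ~120 Hz at maximum brightness
```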

For Model C (Su et al., 2017), the receptive field of each of the 16 EB wedges covers 22.5° in azimuth. Two PB-EB-LAL neurons, each from a different PB glomerulus, project to every wedge. Input to the Model C circuit is presented to pairs of PB glomeruli (see also the arrows in Figure 6 (c3)), and all neurons with dendrites in these two PB glomeruli receive a spike train input at a rate proportional to the filtered visual stimuli, with a maximum of 50 Hz when the bar is at maximum brightness 1. Neurons are modeled as LIF neurons with a refractory period of 2 ms, as suggested in Su et al., 2017. Synapses are either modeled by the AMPA/GABA$_A$ receptor dynamics as

(3) $g_i(t) = \bar{g}_i s_i(t)$, $\quad \frac{ds_i}{dt}(t) = -\frac{s_i(t)}{\tau_i} + \sum_k \delta(t - t_k^i)$,

where gi(t) is the synaptic conductance, τi is the time constant, and si(t) is the state variable of the i th synapse, or modeled by the NMDA receptor dynamics (Su et al., 2017)

(4) $g_i(t) = \dfrac{\bar{g}_i s_i(t)}{1 + \frac{[\mathrm{Mg}^{2+}]_i}{3.56}\, e^{-0.062 V}}$, $\quad \dfrac{ds_i}{dt}(t) = -\dfrac{s_i(t)}{\tau_i} + \alpha_i \left(1 - s_i(t)\right) \sum_k \delta(t - t_k^i)$,

where $g_i(t)$ is the synaptic conductance, $\bar{g}_i$ is the maximum conductance, $s_i(t)$ is the state variable, $\tau_i$ is the time constant, $\alpha_i > 0$ is a constant and $[\mathrm{Mg}^{2+}]_i$ is the extracellular concentration of Mg$^{2+}$, all of the $i$th synapse, and $V$ is the membrane voltage of the postsynaptic neuron.
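
The voltage dependence of the NMDA conductance in Equation (4) can be illustrated with a small helper function; the 1 mM Mg$^{2+}$ concentration is an assumed value.

```python
import numpy as np

def nmda_gate(V_mV, mg_mM=1.0):
    """Fraction of the maximal NMDA conductance available at voltage V,
    following the Mg2+-block factor of Equation (4)."""
    return 1.0 / (1.0 + mg_mM * np.exp(-0.062 * V_mV) / 3.56)

# the Mg2+ block is relieved as the postsynaptic neuron depolarizes
blocked = nmda_gate(-70.0)     # near rest: mostly blocked
unblocked = nmda_gate(0.0)     # depolarized: mostly unblocked
```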

Parameters of the above models can be found in Givon et al., 2017; Kakaria and de Bivort, 2017; Su et al., 2017 and in the CXcircuit Library.

Commonly used models, such as the LIF neuron and the $\alpha$-synapse, are built into the Neurokernel Execution Engine. Only the model parameters residing in the NeuroArch Database need to be specified via the NeuroArch API. Neurokernel automatically retrieves the circuit models and their parameters from the NeuroArch Database based on the last queried circuit model. For models that are not yet built into the Neurokernel Execution Engine, such as the PSC-based model in Model B, users must provide an implementation supported by the Neurodriver API.

The 35 s visual stimulus, depicted in Figure 6 (d3), was presented to all three models. A bright vertical bar moves back and forth across the entire visual field while a second bar with lower brightness is presented at a fixed position. Figure 6 (d3) (bottom) depicts the time evolution of the visual input.

To visualize the response of the three executable circuits, the mean firing rate rj(t) of all EB-LAL-PB neurons that innervate the j th EB wedge was calculated following Su et al., 2017.

(5) $r_j(t) = \frac{1}{N_j} \sum_{i \in \mathcal{I}_j} \left( \sum_k \delta(t - t_k^i) \right) \ast e^{-t/0.7215}$,

where $\ast$ denotes the convolution operator, $\mathcal{I}_j$ is the index set of the EB-LAL-PB neurons that innervate the $j$th EB wedge, $N_j$ is its cardinality, and $t_k^i$ is the time of the $k$th spike generated by the $i$th neuron. The CXcircuits Library provides utilities to visualize the circuit response.
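
A minimal sketch of Equation (5), with illustrative spike times for two neurons of one wedge, smooths the pooled spike trains with the causal exponential kernel and averages over the population.

```python
import numpy as np

dt = 1e-3
t = np.arange(0.0, 2.0, dt)
kernel = np.exp(-t / 0.7215)                    # kernel of Equation (5)

spike_times = [[0.1, 0.5, 0.9], [0.2, 0.6]]     # illustrative spike trains
binned = np.zeros_like(t)
for train in spike_times:
    idx = np.rint(np.asarray(train) / dt).astype(int)
    binned[idx] += 1.0 / dt                     # Dirac deltas -> 1/dt bins

# causal convolution with the kernel, averaged over the N_j = 2 neurons
r = np.convolve(binned, kernel)[: len(t)] * dt / len(spike_times)
```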

Jupyter notebooks for Models A, B and C used to generate Video 1 are available at https://github.com/FlyBrainLab/CXcircuits/tree/master/notebooks/elife20.

Use Case 5: analyzing, evaluating, and comparing adult antenna and antennal lobe circuit models based upon the FlyCircuit and hemibrain datasets

The Early Olfactory System models based on the FlyCircuit and the Hemibrain datasets were implemented using the EOScircuits Library (see also Appendix 3). The circuit architecture, shown in Figure 7b (left), follows previous work (Lazar et al., 2020a) based upon the inferred connectivity between LNs and PNs in the FlyCircuit dataset and the functional connectivity between LNs and OSNs observed in experiments (Olsen and Wilson, 2008) (see also Figure 7a (left)). Specifically, LNs are separated into two groups: a group of presynaptically acting LNs assumed to receive inputs from all OSNs and to project to the axon terminals of each of the OSNs, and a group of postsynaptically acting LNs assumed to receive inputs from all OSNs and to provide inputs to the PNs that arborize in the same glomerulus. Only uniglomerular PNs are modeled, and their characterization is limited to their connectivity. For the Hemibrain dataset, the exact local neurons and their connectivity within the AL circuit are used. Specifically, LNs are divided into presynaptically acting and postsynaptically acting ones based on the number of synaptic contacts onto OSNs and PNs, respectively: if the majority of an LN's synapses target OSNs, it is modeled as a presynaptically acting LN; otherwise, it is modeled as a postsynaptically acting LN. Note that the connectivity pattern in the circuit model based on the FlyCircuit dataset is inferred (Lazar et al., 2020a), whereas that in the circuit model based on the Hemibrain dataset is extracted from the data.
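
The majority rule used to split the Hemibrain LNs can be sketched as follows; the neuron names and synapse counts are hypothetical placeholders, not Hemibrain values.

```python
def classify_ln(syn_onto_osn, syn_onto_pn):
    """An LN is presynaptically acting if most of its output synapses
    target OSNs, and postsynaptically acting otherwise."""
    return "pre" if syn_onto_osn > syn_onto_pn else "post"

# hypothetical LNs with (synapses onto OSNs, synapses onto PNs)
ln_synapses = {"LN_a": (120, 30), "LN_b": (15, 210)}
groups = {name: classify_ln(*counts) for name, counts in ln_synapses.items()}
# groups == {"LN_a": "pre", "LN_b": "post"}
```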

At the input layer (the Antenna Circuit), the stimulus model for the adult EOS circuit builds upon affinity data from the DoOR dataset (Münch and Galizia, 2016), with physiological recordings for 23/51 receptor types. Receptors for which there is no affinity data in the DoOR dataset were assumed to have zero affinity values. Two input stimuli were used. The first stimulus was 5 s long; between 1 and 3 s, ammonium hydroxide at a constant concentration of 100 ppm was presented to the circuits in Figure 7b, and the responses are shown in Figure 7c. The second stimulus, used in Figure 7d, carried the same odorant. To generate its concentration waveform, values were drawn randomly from the uniform distribution between 0 and 100 ppm every 10−4 seconds between 1 and 3 s; the sequence was then filtered by a lowpass filter with a 30 Hz bandwidth (Kim et al., 2011) to obtain the concentration of the odorant.
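
The construction of the second stimulus can be sketched as follows; a first-order filter stands in for the (unspecified) 30 Hz lowpass filter as an illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(0)

# concentration values drawn uniformly from [0, 100] ppm every 1e-4 s
# between 1 and 3 s, zero otherwise
dt = 1e-4
t = np.arange(0.0, 5.0, dt)
raw = np.zeros_like(t)
on = (t >= 1.0) & (t < 3.0)
raw[on] = rng.uniform(0.0, 100.0, on.sum())      # ppm

# assumed first-order lowpass with 30 Hz cutoff: y' = 2*pi*fc*(x - y)
fc, y = 30.0, 0.0
conc = np.empty_like(raw)
for k, x in enumerate(raw):
    y += dt * 2.0 * np.pi * fc * (x - y)
    conc[k] = y
```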

Olfactory Sensory Neurons expressing each receptor type processed the input odorant in parallel. The Antennal Lobe model based on the FlyCircuit data is divided into two sub-circuits: (1) the ON-OFF circuit and (2) the Predictive Coding circuit (Lazar and Yeh, 2019). The ON-OFF circuit describes odorant gradient encoding by postsynaptic LNs in the AL, while the Predictive Coding circuit describes a divisive normalization mechanism by presynaptic LNs that enables concentration-invariant odorant identity encoding by Projection Neurons in the AL.

The EOS model based on the Hemibrain dataset takes advantage of the detailed connectivity between neurons (see Figure 7a (right)) and introduces a more extensive connectome/synaptome model of the AL (see Figure 7b (right)). FlyBrainLab utility libraries were used to (1) access the Hemibrain data, (2) find PNs and group them by glomeruli, (3) use these data to find the OSNs associated with each glomerulus, and (4) find LNs and extract the connectivity between OSNs, LNs, and PNs. Multiglomerular PNs were not included, contralateral LN connections were ignored, and all PNs were assumed to be excitatory. An executable circuit was constructed in FlyBrainLab using the Hemibrain data. In addition to the baseline model in Figure 7b (left), the following components were introduced: (1) LNs that innervate specific subsets of glomeruli, (2) LNs that provide inputs to both OSN axon terminals and PN dendrites, and (3) synapses from PNs onto LNs.

Use Case 6: evaluating, analyzing, and comparing early Olfactory circuit models of the larva and the adult fruit flies

The Early Olfactory System models for both the adult and the larval flies were implemented using the EOScircuits Library (see also Appendix 3). The circuit of the adult EOS follows the one described above. Similarly, the larval model is implemented using physiological recordings of 14/21 receptor types (Kreher et al., 2005). In both the adult and larval physiology datasets, 13 common mono-molecular odorants were employed (see Figure 8c (legend)). Together, 13/23 odorant/receptor pairs for the adult and 13/14 odorant/receptor pairs for the larva were used for model evaluation, where each odorant was carried by a 100 ppm concentration waveform. In both the adult and larva Antenna circuits, Olfactory Sensory Neurons expressing each receptor type processed an odorant waveform in parallel.

The adult Antennal Lobe model follows the one built on the Hemibrain data (Scheffer et al., 2020). Both the adult and the larva circuit components are parameterized by the number of LNs per type; for instance, 28 LNs were used in the larval model in accordance with connectome data (Berck et al., 2016). In addition to neuron types, the AL circuit was modified in terms of connectivity from (1) LNs to Projection Neurons (PNs), (2) PNs to LNs, and (3) LNs to LNs. Refer to Table 1 for more details.

The evaluation of both EOS models focused on comparing the input/output relationships of the adult and the larval EOS models. For each of the 13 odorants, the input stimulus is a 5 s concentration waveform that is 100 ppm between 1 and 4 s and 0 ppm otherwise. Both the adult and larval models reach steady state after 500 ms, and the steady-state population responses averaged across 3–4 s are computed as the odorant combinatorial code at each layer (i.e. OSN response, PN response).

Table 1
Neurons and neuron types used for visualization and simulation in Figure 8.
| Neuropil | Neuron type | Organism | Number (model in Figure 8b) | Number (visualization in Figure 8a) |
| --- | --- | --- | --- | --- |
| Antenna | Olfactory Sensory Neuron | Adult | 51 receptor types (channels), 1357 olfactory sensory neurons in total | 1357 |
| Antenna | Olfactory Sensory Neuron | Larva | 14 receptor types (channels), 1 neuron per receptor type (14 neurons in total) | 21 |
| Antennal Lobe | Uniglomerular Projection Neuron | Adult | 1 neuron per channel (51 in total) | 141 (different number per glomerulus) |
| Antennal Lobe | Uniglomerular Projection Neuron | Larva | 1 neuron per channel (14 neurons in total) | 21 |
| Antennal Lobe | Presynaptic Local Neuron | Adult | 97 neurons | 97 |
| Antennal Lobe | Presynaptic Local Neuron | Larva | 6 pan-glomerular neurons | 5 |
| Antennal Lobe | Postsynaptic Inhibitory Local Neuron | Adult | 77 neurons | 77 |
| Antennal Lobe | Postsynaptic Inhibitory Local Neuron | Larva | 0-1 neuron per channel (11 neurons in total) | 7 |
| Antennal Lobe | Postsynaptic Excitatory Local Neuron | Adult | 77 (assumed equal to the Postsynaptic Inhibitory Local Neuron population) | 77 |
| Antennal Lobe | Postsynaptic Excitatory Local Neuron | Larva | 0-1 neuron per channel (11 neurons in total) | 7 |

Code availability and installation

Stable and tested FlyBrainLab installation instructions for the user-side components and utility libraries are available at https://github.com/FlyBrainLab/FlyBrainLab for Linux, macOS, and Windows. The installation and use of FlyBrainLab does not require a GPU, but a server-side backend, hosted for example on a cloud service, must be running for the user side of FlyBrainLab to connect to. By default, the user-side-only installation accesses the backend services hosted on our public servers. Note that users do not have write permission to the NeuroArch Database, nor can they access a Neurokernel Server for execution. The server-side backend codebase is publicly available at https://github.com/fruitflybrain and https://github.com/neurokernel.

A full installation of FlyBrainLab, including all backend and frontend components, is available as a Docker image at https://hub.docker.com/r/fruitflybrain/fbl. The image requires a Linux host with at least 1 CUDA-enabled GPU and the nvidia-docker package (https://github.com/NVIDIA/nvidia-docker) installed. For a custom installation of the complete FlyBrainLab platform, a shell script is available at https://github.com/FlyBrainLab/FlyBrainLab.

To help users get started, a number of tutorials are available written as Jupyter notebooks at https://github.com/FlyBrainLab/Tutorials, including a reference to English queries at https://github.com/FlyBrainLab/Tutorials/blob/master/tutorials/getting_started/1b_nlp_queries.ipynb. An overview of the FlyBrainLab resources is available at https://github.com/FlyBrainLab/FlyBrainLab/wiki/FlyBrainLab-Resources.

Data availability

The NeuroArch Database hosting publicly available FlyCircuit (RRID:SCR_006375), Hemibrain, Medulla 7-column and Larva L1EM datasets can be downloaded from https://github.com/FlyBrainLab/datasets. The same repository provides Jupyter notebooks for loading publicly available datasets, such as the FlyCircuit dataset with inferred connectivity (Huang et al., 2018), the Hemibrain dataset, the Medulla 7-column dataset and the Larva L1 EM dataset.

Appendix 1

The architecture of FlyBrainLab

To support the study of the function of brain circuits, FlyBrainLab implements an extensible, modularized architecture that tightly integrates fruit fly brain data and models of executable circuits. Appendix 1—figure 1 depicts the architecture of FlyBrainLab on both the user side and the backend server side.

The backend server-side components are described in Appendix 1.1. The user-side components are presented in Appendix 1.2.

1.1 The server-side components

The server-side backend consists of four components: FFBO Processor, NeuroArch, Neurokernel and NeuroNLP servers. They are collectively called the FFBO servers. A brief description of each of the components is given below.

FFBO Processor implements a Crossbar.io router (https://crossbar.io/) that establishes the communication path among connected components. Components communicate using routed Remote Procedure Calls (RPCs) and a publish/subscribe mechanism. The routed RPCs enable functions implemented on the server side to be called by the user-side backend (see also Section 1.2). After an event occurs, the publisher immediately informs topic subscribers via the publish/subscribe mechanism. This enables, for example, the FFBO Processor to inform the user side and other servers when a new backend server is connected. The FFBO Processor can be hosted locally or in the cloud; it can also be hosted by a service provider, for example, to facilitate data sharing. The open source code of the FFBO Processor is available at https://github.com/fruitflybrain/ffbo.processor.

NeuroArch Server hosts the NeuroArch graph database (Givon et al., 2015) implemented with OrientDB (https://orientdb.org). The NeuroArch Database provides a novel data model for representation and storage of connectomic, synaptomic, cell type, activity, and genetic data of the fruit fly brain with cross-referenced executable circuits. The NeuroArch data model is the foundation of the integration of fruit fly brain data and executable circuits in FlyBrainLab. Low-level queries of the NeuroArch Database are supported by the NeuroArch Python API (https://github.com/fruitflybrain/neuroarch). The NeuroArch Server provides high level RPC APIs for remote access of the NeuroArch Database. The open source code of the NeuroArch Server is available at https://github.com/fruitflybrain/ffbo.neuroarch_component.

Neurokernel Server provides RPC APIs for the execution of model circuits by the Neurokernel Execution Engine (Givon and Lazar, 2016). Neurokernel supports the easy combination of independently developed executable circuits towards the realization of a complete whole-brain emulation.

The Neurokernel Server directly fetches the specification of executable circuits from the NeuroArch Server, instantiates these circuits and transfers them for execution to the Neurokernel Execution Engine. The open source code of the Neurokernel Server is available at https://github.com/fruitflybrain/ffbo.neurokernel_component.

NeuroNLP Server provides an RPC API for translating queries written as English sentences, such as ‘add dopaminergic neurons innervating the mushroom body’, into database queries that can be interpreted by the NeuroArch Server API. This capability increases the accessibility of the NeuroArch Database to researchers without prior exposure to database programming, and facilitates research by simplifying the often-demanding task of writing database queries. The open source code of the NeuroNLP Server is available at https://github.com/fruitflybrain/ffbo.nlp_component.

1.2 The user-side components

The FlyBrainLab user-side consists of the NeuroMynerva frontend and the FlyBrainLab Client and Neuroballad backend components. A brief description of each of the components is given below.

NeuroMynerva is the user-side frontend of FlyBrainLab. It is a browser-based application that substantially extends JupyterLab with a number of widgets, including a Neu3D widget for 3D visualization of fly brain data, a NeuGFX widget for exploring executable neural circuits with interactive circuit diagrams, and an InfoPanel widget for accessing individual neuron/synapse data. All widgets communicate with and retrieve data from the FlyBrainLab Client. A master widget keeps track of the widgets instantiated by the user interface. With JupyterLab’s native notebook support, APIs of the FlyBrainLab Client can be directly called from notebooks; such calls provide Python access to NeuroArch queries and circuit execution. Interaction between code running in notebooks and widgets is fully supported.

FlyBrainLab Client is a user-side backend implemented in Python that connects to the FFBO Processor and accesses services provided by the connected backend servers. FlyBrainLab Client provides program execution APIs for handling requests to the server-side components and for parsing the data coming from backend servers. FlyBrainLab Client also provides a number of high-level APIs for processing data collected from the backend servers, such as computing the adjacency matrix from retrieved connectivity data or retrieving morphometric data. In addition, it handles the communication with the frontend through the Jupyter kernel.

NeuroBallad is a Python library that simplifies and accelerates executable circuit construction and simulation using Neurokernel in Jupyter notebooks in FlyBrainLab. NeuroBallad provides classes for specification of neuron or synapse models with a single line of code and contains functions for adding and connecting these circuit components with one another. NeuroBallad also provides capabilities for compactly specifying inputs to a circuit on a per-experiment basis.

Core Functionalities Provided by FlyBrainLab

Examples of capabilities that users can directly invoke are: (1) Query using plain English and 3D graphics for building and visualizing brain circuits; (2) Query the NeuroArch Database using the NeuroArch JSON format for building and visualizing brain circuits and constructing executable circuits; (3) Retrieval of the connectivity of the brain circuit built/visualized; (4) Retrieval of the graph representing the circuit built/visualized; (5) Retrieval of the circuit model corresponding to the brain circuit built; (6) User interface and API for circuit diagram interaction; (7) Specification of models and parameters for each circuit component; (8) Execution of the circuits represented/stored in the NeuroArch Database.

Appendix 1—figure 1
The architecture of FlyBrainLab.

The server-side architecture (Ukani et al., 2019) consists of the FFBO Processor, the NeuroNLP Server, the NeuroArch Server and the Neurokernel Server. The user-side provides the local execution environment as well as an easy-to-use GUI for multi-user access to the services provided by the server-side. The FlyBrainLab Utility Libraries and Circuit Libraries (see Sections 2 and 3 for details) can be loaded into the FlyBrainLab workspace of the user-side backend.

Appendix 2

Utility libraries for the fruit fly connectome/synaptome

Different connectome and synaptome datasets are often available at different levels of abstraction. For example, some datasets come with cell types labeled while others only provide raw graph-level connectivity. Without extensive analysis tools, it takes substantial manual effort to construct and test a neural circuit. FlyBrainLab offers a number of utilities to explicate the graph structure of neural circuits from raw connectome and synaptome data. In conjunction with the capability of visually constructing circuits enabled by the NeuroMynerva frontend, these utilities speed up the creation of interactive executable circuit diagrams and thereby substantially shorten the exploratory development cycle.

The FlyBrainLab Utility Libraries include:

  • NeuroEmbed: Cell Classification and Cell Type Discovery,

  • NeuroSynapsis: High Level Queries and Analysis of Connectomic and Synaptomic Data,

  • NeuroGraph: Connectivity Pattern Discovery and Circuit Visualization Algorithms,

  • NeuroWatch: 3D Fruit Fly Data Visualization in Jupyter Notebooks,

  • NeuroMetry: Morphometric Measurements of Neurons.

In this section, we outline the capabilities enabled by the Utility Libraries listed above.

NeuroEmbed: cell classification and cell type discovery

The NeuroEmbed Library implements a set of algorithms for structure discovery based on graph embeddings into low-dimensional spaces, providing capabilities for:

  • Cell type classification based on connectivity, and optionally morphometric features,

  • Searching for neurons that display a similar connectivity pattern,

  • Standard evaluation functions for comparison of embedding algorithms on clustering and classification tasks.
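
The connectivity-based embedding underlying these capabilities can be illustrated on a synthetic adjacency matrix with two cell types; the toy matrix and the SVD-based embedding are illustrative stand-ins for the algorithms in NeuroEmbed.

```python
import numpy as np

rng = np.random.default_rng(1)

# synthetic weighted adjacency matrix with two cell types of 10 neurons
# each: neurons only synapse onto neurons of their own (assumed) type
n_per_type = 10
block = np.kron(np.eye(2), np.ones((n_per_type, n_per_type)))
A = block * rng.uniform(0.5, 1.0, block.shape)

# embed each neuron's connectivity row into 2-D via the truncated SVD
U, S, _ = np.linalg.svd(A, full_matrices=False)
embedding = U[:, :2] * S[:2]

# neurons of the same type land close together in the embedding
d_within = np.linalg.norm(embedding[0] - embedding[1])
d_between = np.linalg.norm(embedding[0] - embedding[n_per_type])
```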

NeuroSynapsis: high-level queries and analysis of connectomic and synaptomic data

The NeuroSynapsis Library offers a large set of utilities to accelerate the construction of circuits and the analysis of connectomic and synaptomic data. It provides capabilities for:

  • Retrieval of neuron groups according to user-defined criteria, such as cell type, innervation pattern and connectivity, etc.,

  • Retrieval of connectivity between neurons, cell types, or user-defined neuron groups, through direct or indirect connections,

  • Retrieval of synapse positions and partners for groups of neurons and the capability to filter synapses by brain region, partnership or spatial location,

  • Statistical analysis of retrieved synapses, such as the synaptic density in a brain region.

NeuroGraph: connectivity pattern discovery and circuit visualization algorithms

The NeuroGraph Library offers a set of tools to discover and analyze any connectivity pattern among cell groups within a circuit. Capabilities include:

  • Discovery of connectivity patterns between cell populations by automatic generation of connectivity dendrograms with different linkage criteria (such as Ward or average) (Ward, 1963; Sokal, 1958),

  • Analysis of the structure of circuits by community detection algorithms such as Louvain, Leiden, Label Propagation, Girvan-Newman and Infomap,

  • Analysis of neural circuit controllability, for example discovery of driver nodes (Liu et al., 2011),

  • Comparing observed connectivity between groups of cells with models of random connectivity.
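
A dendrogram-based discovery step of this kind can be sketched on synthetic synapse-count profiles; the two populations and their Poisson statistics are assumptions made for illustration.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(2)

# rows: synthetic neurons; columns: synapse counts onto four downstream
# cell groups; two populations with strong vs. weak outputs
profiles = np.vstack([
    rng.poisson(20.0, size=(5, 4)),   # population 1: strong outputs
    rng.poisson(2.0, size=(5, 4)),    # population 2: weak outputs
]).astype(float)

Z = linkage(profiles, method="ward")              # Ward-linkage dendrogram
labels = fcluster(Z, t=2, criterion="maxclust")   # cut into two clusters
# the two synthetic populations are recovered as the top-level clusters
```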

In addition, the NeuroGraph Library provides utilities to visualize the connectivity of a neural circuit to aid the creation of interactive circuit diagrams. Further capabilities include

  • Force-directed layout for the architecture-level graph of the whole brain or circuit-level graph of circuits specified by NeuroSynapsis,

  • Semi-automated generation of 2D circuit diagrams from specified connectome datasets, either at single-neuron or cell-type scale, by separating circuit components into input, local and output populations for layout.

NeuroWatch: 3D fruit fly data visualization in Jupyter notebooks

The NeuroWatch Library offers utilities to enable visualization of neuron morphology data using Neu3D in Jupyter Notebook cells. Capabilities include:

  • Loading brain regions in 3D mesh format,

  • Recoloring, rescaling, repositioning and rotating neuropils, neurons and synapses for visualization,

  • Interactive alignment of new neuromorphology datasets into FlyBrainLab widgets.

NeuroMetry: morphometric measurements of neurons

Morphometric measurements of neurons can be extracted from neuron skeleton data available in connectome datasets in .swc format. NeuroMetry provides utilities for

  • Calculating morphometric measurements of neurons that are compatible with NeuroMorpho.org (Ascoli et al., 2007), such as total length, total surface area, total volume, maximum euclidean distance between two points, width, height, depth, average diameter, number of bifurcations and the maximum path distance,

  • Accessing precomputed measurements in currently available datasets in FlyBrainLab, including FlyCircuit and Hemibrain data.

An application of the morphometric measurements is the estimation of energy consumption arising from spike generation in axon-hillocks (Sengupta et al., 2010).
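
One such measurement, the total cable length, can be computed directly from SWC rows (id, type, x, y, z, radius, parent); the three-point skeleton below is an illustrative stand-in for a real .swc file.

```python
import numpy as np

# tiny illustrative SWC skeleton: id, type, x, y, z, radius, parent
swc = np.array([
    [1, 1, 0.0, 0.0, 0.0, 1.0, -1],
    [2, 3, 3.0, 0.0, 0.0, 0.5,  1],
    [3, 3, 3.0, 4.0, 0.0, 0.5,  2],
])

def total_length(swc):
    """Sum of Euclidean distances from each node to its parent."""
    coords = {int(row[0]): row[2:5] for row in swc}
    length = 0.0
    for row in swc:
        parent = int(row[6])
        if parent != -1:              # the root node has no parent segment
            length += np.linalg.norm(row[2:5] - coords[parent])
    return length

print(total_length(swc))   # segments of length 3 and 4 -> 7.0
```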

Appendix 3

Libraries for analyzing, evaluating, and comparing Fruit FlyBrain circuits

The Circuit Libraries are built on top of the core FlyBrainLab architecture and provide tools for studying the functional logic of a specific or a set of distributed brain regions/circuits. The FlyBrainLab Circuit Library includes:

  • CXcircuits: Library for Central Complex Circuits,

  • EOScircuits: Library for Larva and Adult Early Olfactory Circuits,

  • MolTrans: Library for Molecular Transduction in Sensory Encoding.

These are respectively described in Appendix 3.1, 3.2 and 3.3 below.

3.1 CXcircuits: library for central complex circuits

The CXcircuits Library facilitates the exploration of neural circuits of the central complex (CX) based on the FlyCircuit dataset (Chiang et al., 2011). It supports the evaluation and direct comparison of state-of-the-art executable circuit models of the CX available in the literature, and accelerates the development of new executable CX circuit models that can be evaluated and scrutinized by the research community at large in terms of modeling assumptions and biological validity. It can easily be expanded to account for the Hemibrain dataset (Scheffer et al., 2020). The main capabilities of the CXcircuits Library include programs for:

  • Constructing biological CX circuits featuring

    • A naming scheme for CX neurons that is machine parsable and facilitates the extraction of innervation patterns (Givon et al., 2017),

    • Algorithms for querying CX neurons in the NeuroArch database, by neuron type, by subregions they innervate, and by connectivity,

    • An inference algorithm for identifying synaptic connections between CX neurons in NeuroArch according to their innervation patterns,

  • Constructing executable CX circuit diagrams that

    • Are interactive for CX circuits in wild-type fruit flies,

    • Interactively visualize neuron innervation patterns and circuit connectivity,

    • Are interoperable with 3D visualizations of the morphology of CX neurons,

    • Easily reconfigure CX circuits by enabling/disabling neurons/synapses, by enabling/disabling subregions in any of the CX neuropils, and by adding neurons,

    • Readily load neuron/synapse models and parameters,

  • Evaluation of the executable CX circuits with

    • A common set of input stimuli, and the

    • Visualization of the execution results with a set of plotting utilities for generating raster plots and a set of animation utilities for creating videos.

3.2 EOScircuits library for larva and adult early olfactory circuits

The EOScircuits Library accelerates the development of models of the fruit fly early olfactory system (EOS), and facilitates structural and functional comparisons of olfactory circuits across developmental stages from the larva to the adult fruit fly. Built upon FlyBrainLab’s robust execution backend, the EOScircuits Library enables rapid iterative model development and comparison for Antenna (ANT), Antennal Lobe (AL) and Mushroom Body (MB) circuits across developmental stages.

ANTcircuits

Modeled after the first layer of the olfactory pathway, the ANTcircuits Library builds upon the Olfactory Transduction (OlfTrans) Library (see Section 3.3 below) and describes interactions between odorant molecules and Olfactory Sensory Neurons (OSNs). The library provides parameterized ANT circuits that support manipulations including:

  • Changing the affinity values of each of the odorant-receptor pairs characterizing the input of the Odorant Transduction Process (Lazar and Yeh, 2020),

  • Changing parameter values of the Biological Spike Generators (BSGs) associated with each OSN (Lazar and Yeh, 2020),

  • Changing the number of OSNs expressing the same Odorant Receptor (OR) type.

ALcircuits

Modeled after the second layer of the olfactory pathway, the ALcircuits Library describes the interaction between the OSNs in the ANT and the Projection Neurons and Local Neurons in the AL. The library provides parameterized AL circuits that support manipulations including:

  • Changing parameter values of Biological Spike Generators (BSGs) associated with each of the Local and Projection Neurons,

  • Changing the number and connectivity of Projection Neurons innervating a given AL Glomerulus,

  • Changing the number and connectivity of Local Neurons in the Predictive Coding and ON-OFF circuits of the AL (Lazar and Yeh, 2019).

MBcircuits

Modeled after the third neuropil of the olfactory pathway, the MBcircuits Library describes the expansion-and-sparsification circuit consisting of a population of Antennal Lobe Projection Neurons and Mushroom Body Kenyon Cells (KCs) (Lazar et al., 2020a). The library provides a parameterized MB subcircuit involving Kenyon Cells and the Anterior Paired Lateral (APL) neuron, and supports circuit manipulations including:

  • Generating and changing random connectivity patterns between PNs and KCs with a varying fan-in ratio (the number of PNs connected to a given KC),

  • Changing the strength of feedback inhibition of the APL neuron.
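
Random PN-to-KC connectivity with a fixed fan-in can be sketched as follows; the population sizes and the fan-in value are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)

def random_pn_kc(n_pn=50, n_kc=2000, fan_in=6):
    """Binary PN-to-KC connectivity: each Kenyon cell samples `fan_in`
    distinct projection neurons uniformly at random."""
    W = np.zeros((n_kc, n_pn), dtype=int)
    for k in range(n_kc):
        W[k, rng.choice(n_pn, size=fan_in, replace=False)] = 1
    return W

W = random_pn_kc()
# every KC receives exactly fan_in = 6 inputs; the identity of the
# presynaptic PNs varies randomly from KC to KC
```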

3.3 MolTrans library for molecular transduction in sensory encoding

The Molecular Transduction Library accelerates the development of models of early sensory systems of the fruit fly brain by providing (1) implementations of transduction on the molecular level that accurately capture the encoding of inputs at the sensory periphery, and (2) activity data of the sensory neurons such as electrophysiology recordings for the validation of executable transduction models. The MolTrans Library includes the following packages:

  • Olfactory Transduction (OlfTrans): Molecular Transduction in Olfactory Sensory Neurons,

  • Visual Transduction (VisTrans): Molecular Transduction in Photoreceptors.

The capabilities of these two packages are discussed in what follows.

OlfTrans: Odorant Transduction in Olfactory Sensory Neurons

The OlfTrans Library (https://github.com/FlyBrainLab/OlfTrans) provides the following capabilities (see also Lazar and Yeh, 2020):

  • Defines a model of odorant space for olfactory encoding in the adult and larva olfactory system,

  • Hosts a large set of electrophysiology recordings of OSNs responding to different odorants with precisely controlled odorant waveforms (Kim et al., 2011).

Moreover, the OlfTrans Library offers

  • The model of an odorant transduction process (OTP) validated by electrophysiology data and executable on Neurokernel/NeuroDriver,

  • Algorithms for fitting and validation of OTP models with electrophysiology data of the Olfactory Sensory Neurons,

  • Algorithms for importing odorant transduction models and data into NeuroArch and execution on Neurokernel.

The OlfTrans Library provides critical resources in the study of any subsequent stages of the olfactory system. It serves as an entry point for discovering the function of the circuits in the olfactory system of the fruit fly.
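To illustrate the kind of precisely controlled odorant waveforms that such recordings are made with, a simple step-concentration stimulus can be constructed as follows (a generic sketch; the amplitude, timing and function name are illustrative and not part of the OlfTrans API):

```python
import numpy as np

def step_waveform(dt, dur, start, stop, amplitude):
    """Odorant concentration waveform: `amplitude` between
    `start` and `stop` seconds, zero elsewhere."""
    t = np.arange(0.0, dur, dt)
    u = np.zeros_like(t)
    u[(t >= start) & (t < stop)] = amplitude
    return t, u

# a 2-second odorant pulse embedded in a 4-second recording window
t, u = step_waveform(dt=1e-4, dur=4.0, start=1.0, stop=3.0, amplitude=100.0)
```

Staircase or ramp stimuli can be built the same way by summing several such steps.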

VisTrans: PhotoTransduction in Photoreceptors

The VisTrans Library provides the following capabilities (see also Lazar et al., 2015a):

  • A geometrical mapping algorithm of the visual field onto photoreceptors of the retina of the fruit fly,

  • A molecular model of the phototransduction process described and biologically validated in Song et al., 2012,

  • A parallel processing algorithm emulating the processing of the visual field by the entire fruit fly retina,

  • Algorithms for importing phototransduction models into the NeuroArch Database and for program execution on the Neurokernel Execution Engine,

  • Algorithms for visually evaluating photoreceptor models.

The VisTrans Library accelerates the study of the contribution of photoreceptors towards the overall spatiotemporal processing of visual scenes. It also serves as an entry point for discovering circuit function in the visual system of the fruit fly (Lazar et al., 2020b).
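Inputs to photoreceptor models are naturally expressed as photon counts. For a constant light intensity given in photons per second, per-timestep photon arrivals can be drawn from a Poisson distribution (a generic sketch, not the VisTrans API; the helper name is hypothetical):

```python
import numpy as np

def photon_arrivals(rate, dt, dur, seed=0):
    """Poisson photon counts per timestep for a constant light
    intensity `rate` (photons/second)."""
    rng = np.random.default_rng(seed)
    n_steps = int(dur / dt)
    # expected photons per step is rate * dt
    return rng.poisson(rate * dt, size=n_steps)

# 10,000 photons/s at a 0.1 ms timestep: on average 1 photon per step
counts = photon_arrivals(rate=1e4, dt=1e-4, dur=1.0)
```

This matches the intensity scale used in the execution example of Appendix 4, where a step input of 10,000 photons per second is presented to the photoreceptors.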

Appendix 4

Creating an interactive executable circuit model of the lamina cartridge

In this appendix we walk through an example of creating an interactive executable circuit of the lamina cartridge. The starting point is the connectomic data of a lamina cartridge. Along the way, we highlight core FlyBrainLab capabilities: loading data, querying and analyzing data, creating an interactive circuit diagram, and executing the resulting circuit.

The example here is accompanied by a Jupyter notebook available at https://github.com/FlyBrainLab/Tutorials/blob/master/tutorials/cartridge/Cartridge.ipynb. The notebook is intended to be used inside NeuroMynerva. Running the code requires a full FlyBrainLab installation (see Code Availability and Installation), and write access to the NeuroArch server (default for a full installation).

For simplicity, start a new NeuroArch server connected to an empty database folder that will be populated with the cartridge data. After running ‘start.sh’ to start FlyBrainLab (see also https://github.com/FlyBrainLab/FlyBrainLab/wiki/How-to-use-FlyBrainLab-Full-Installation for instructions), run ‘run_neuroarch.sh lamina lamina’, where the first ‘lamina’ refers to the database folder and the second to the dataset name. Then start an NLP server using any of the named applications (such as hemibrain, flycircuit or medulla) by running ‘run_nlp.sh medulla lamina’. Here ‘medulla’ refers to the NLP application name, and ‘lamina’ to the dataset name; the latter should match the dataset name of the NeuroArch server. In NeuroMynerva, configure a new FlyBrainLab workspace, called ‘adult (lamina)’, to connect to the lamina dataset (see https://github.com/FlyBrainLab/FlyBrainLab/wiki/Installation for instructions on adding new servers/datasets).

In NeuroMynerva, start a new FlyBrainLab workspace using the lamina configuration. Connect the Python kernel of the notebook to the kernel of the new FlyBrainLab workspace. The code below is ready to run in the created workspace.

[1]: import flybrainlab as fbl
     import flybrainlab.query as fbl_query
     import flybrainlab.circuit as circuit
     import pandas as pd
     import numpy as np
     import seaborn as sns
     %matplotlib inline
     import matplotlib.pyplot as plt

The following code obtains the FlyBrainLab Client object that is automatically created when launching a new workspace. It also makes sure that NeuroNLP, NeuroGFX and the client object can communicate with each other.

[2]: client = fbl.get_client()
     for i in fbl.widget_manager.widgets:
         if fbl.widget_manager.widgets[i].widget_id not in \
                 fbl.client_manager.clients[
                     fbl.widget_manager.widgets[i].client_id]['widgets']:
             fbl.client_manager.clients[
                 fbl.widget_manager.widgets[i].client_id]['widgets'].append(
                     fbl.widget_manager.widgets[i].widget_id)

4.1 Loading the connectome datasets into the NeuroArch database

Data can be loaded into the NeuroArch database from the FlyBrainLab frontend using the NeuroArch_Mirror class in the query module, which mirrors the high-level NeuroArch API.

[3]: db = fbl_query.NeuroArch_Mirror(client)

We first create a species of Drosophila melanogaster:

[4]: species = db.add_Species('Drosophila melanogaster',
                stage = 'adult',
                sex = 'female',
                synonyms = ['fruit fly',
                            'common fruit fly',
                            'vinegar fly'])

Then we create a data source under the species:

[5]: data_source = db.add_DataSource('cartridge',
             version = '1.0',
             url = 'https://doi.org/10.1016/j.cub.2011.10.022',
             description = 'Rivera-Alba et al., Current Biology 2011',
             species = list(species.keys())[0])

Make the above data source default.

[6]: db.select_DataSource(list(data_source.keys())[0])

[FBL NA 2021-01-18 10:53:11,717] Default datasource set

Add the lamina neuropil.

[7]: lam = db.add_Neuropil('LAM(L)', 
            synonyms = ['left lamina'])

Create a function to read neuron skeleton data:

[8]: def load_swc(file_name):
         df = pd.read_csv(file_name,
                          sep = ' ',
                          header = None,
                          comment = '#',
                          index_col = False,
                          names = ['sample', 'identifier',
                                   'x', 'y', 'z', 'r', 'parent'],
                          skipinitialspace = True)
         return df

     neuron_list = ['R1', 'R2', 'R3', 'R4', 'R5', 'R6',
                    'L1', 'L2', 'L3', 'L4', 'L5', 'T1',
                    'a1', 'a2', 'a3', 'a4', 'a5', 'a6',
                    'C2', 'C3']
     swc_dir = 'swc'
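Before loading, it can be useful to verify that an SWC file actually encodes a tree, i.e. that every sample's parent pointer is either -1 (root) or refers to an existing sample. A sanity-check sketch (not part of the FlyBrainLab API), reusing the column names produced by load_swc above:

```python
import pandas as pd

def swc_is_tree(df):
    """True if exactly one root (parent == -1) exists and every other
    parent pointer refers to a known sample id."""
    samples = set(df['sample'])
    parents = df['parent']
    roots = (parents == -1).sum()
    valid = parents.isin(samples) | (parents == -1)
    return roots == 1 and bool(valid.all())

# toy three-point skeleton: root, child, grandchild
toy = pd.DataFrame({'sample': [1, 2, 3], 'parent': [-1, 1, 2]})
assert swc_is_tree(toy)
```

A file failing this check (e.g. a dangling parent id) would produce a broken morphology when visualized in NeuroNLP.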

Reading synapse data:

[9]: connections = pd.read_csv('connection.csv', index_col = 0) 
         neuron_order = connections.columns.to_list() 
         adjacency = connections.to_numpy()

Loading neuron data into the database:

[10]: for neuron in neuron_list:
          df = load_swc('{}/{}.swc'.format(swc_dir, neuron))
          morphology = {'x': (df['x']*0.04).tolist(),
                        'y': (df['y']*0.04).tolist(),
                        'z': (df['z']*0.04).tolist(),
                        'r': (df['r']*0.04).tolist(),
                        'parent': df['parent'].tolist(),
                        'identifier': [0]*len(df['x']),
                        'sample': df['sample'].tolist(),
                        'type': 'swc'}
          arborization = [{'type': 'neuropil',
                           'dendrites': {'LAM(L)': int(connections.loc[neuron].sum())},
                           'axons': {'LAM(L)': int(connections[neuron].sum())}}]
          db.add_Neuron(neuron,                # uname
                        neuron,                # name
                        referenceId = neuron,  # referenceId
                        morphology = morphology,
                        arborization = arborization)

Loading synapse data into the database:

[11]: for post_ind, pre_ind in zip(*np.nonzero(adjacency)):
          pre_neuron = neuron_order[pre_ind]
          post_neuron = neuron_order[post_ind]
          db.add_Synapse(pre_neuron, post_neuron,
                         int(adjacency[post_ind][pre_ind]))
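The loading code above relies on a specific convention: rows of connection.csv index postsynaptic neurons and columns index presynaptic ones, so connections.loc[neuron].sum() counts a neuron's inputs (dendrites) and connections[neuron].sum() its outputs (axons). A toy matrix (illustrative numbers only) makes the convention explicit:

```python
import numpy as np
import pandas as pd

names = ['R1', 'L1', 'L2']
# rows = postsynaptic, columns = presynaptic
adj = pd.DataFrame([[0,  0, 0],   # R1 receives no input here
                    [40, 0, 3],   # L1 receives 40 from R1, 3 from L2
                    [35, 2, 0]],  # L2 receives 35 from R1, 2 from L1
                   index=names, columns=names)
assert adj.loc['L1'].sum() == 43  # L1 inputs  (dendrites)
assert adj['R1'].sum() == 75      # R1 outputs (axons)

# iterate (pre, post) pairs the same way the loading loop does
pairs = [(names[pre], names[post])
         for post, pre in zip(*np.nonzero(adj.to_numpy()))]
assert ('R1', 'L1') in pairs
```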

4.2 Building and exploring cartridge pathways

[12]: # provide a wild-card regular expression to match all neuron names 
    res1 = client.executeNLPquery('show all')

[FBL NLP 2021-01-18 10:53:34,363] NLP successfully parsed query.

Appendix 4—figure 1
A lamina cartridge visualized in the NeuroNLP window.

Obtaining the connectivity matrix between neurons that are displayed in the NeuroNLP window.

[13]: g = client.get_neuron_graph(synapse_threshold = 0)
    M, order = g.adjacency_matrix() 
    sns.heatmap(M, xticklabels = order, yticklabels = order);
Appendix 4—figure 2
The connectivity matrix of the lamina cartridge.

4.3 Interactive exploration of the cartridge circuit diagram

Next, we interactively build an executable circuit using a circuit diagram manually created based on the connectivity above. We construct an ExecutableCircuit object from the NLP query result above; since no model is yet associated with these neurons, this initializes a new executable circuit.

[14]: c = circuit.ExecutableCircuit(client, res1, 
                                           model_name = 'cartridge', version = '1.0')
Initializing a new executable circuit

We then load a manually created SVG circuit diagram and make it interactive by injecting a piece of standardized JavaScript code into the NeuroGFX widget. This can all be done easily using the ExecutableCircuit API. The first JavaScript file below defines the additional neuron models needed, namely the PhotoreceptorModel from the VisTrans Library. The second governs the interaction on the diagram in the NeuroGFX window.

[15]: filename = 'cartridge.svg' 
    jsmodeldef = 'update_available_models.js' 
    jsfilename = 'onCartridgeLoad.js'
    c.load_diagram(filename)
    c.load_js(jsmodeldef)
    c.load_js(jsfilename)

sending circuit configuration to GFX

Appendix 4—figure 3
A circuit diagram of the lamina cartridge.

Now we can interact with the diagram: highlight a neuron and its connected neurons, choose the model implementation for each cell by right-clicking it, or single-click to silence/rescue a neuron. For example, right-click the R1 neuron and choose PhotoreceptorModel as its model.

Appendix 4—figure 4
A screenshot of the model library in NeuroGFX for selecting the neuron model and specifying the parameters.

The same model can be propagated to the other photoreceptors R2-R6:

[16]: c.update_model_like(['R{}'.format(i) for i in range(2,7)], 'R1')

sending circuit configuration to GFX

Repeat this for the other neurons, but use a non-spiking MorrisLecar model. You can update the model and parameters on the diagram or with the code below.

[17]: c.update_model('L2', {'V1': -20.0,
            'V2': 50.0,
            'V3': -40.0,
            'V4': 20.0,
            'phi': 0.1,
            'offset': 0.0,
            'V_L': -40.,
            'V_Ca': 80.0,
            'V_K': -80.0,
            'g_L': 15.0,
            'g_Ca': 2.0,
            'g_K': 10.,
            'name': 'MorrisLecar'}, 
        states = {'V': -46.08, 'n': 0.3525})

    c.update_model_like(['L1', 'L3', 'L4', 'L5', 'T1',
            'C2', 'C3', 'a1', 'a2', 'a3',
            'a4', 'a5', 'a6'],
            'L2')

sending circuit configuration to GFX

sending circuit configuration to GFX
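For reference, the parameters V1-V4, phi and the conductances in cell [17] follow the standard Morris-Lecar formulation sketched below; the exact Neurokernel implementation, including the role of the offset parameter (presumably an additive input term), may differ in conventions:

```latex
\begin{aligned}
C \frac{dV}{dt} &= -g_L (V - V_L) - g_{Ca}\, m_\infty(V)\,(V - V_{Ca})
                   - g_K\, n\, (V - V_K) + I_{ext},\\
\frac{dn}{dt} &= \phi\,\frac{n_\infty(V) - n}{\tau_n(V)},\\
m_\infty(V) &= \tfrac{1}{2}\left(1 + \tanh\frac{V - V_1}{V_2}\right),\qquad
n_\infty(V) = \tfrac{1}{2}\left(1 + \tanh\frac{V - V_3}{V_4}\right),\\
\tau_n(V) &= \left(\cosh\frac{V - V_3}{2 V_4}\right)^{-1}.
\end{aligned}
```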

Now we configure all the synapses in the circuit. In particular, the photoreceptors release histamine, an inhibitory neurotransmitter, so their output synapses are given a reversal potential of -80 mV in the code below.

[18]: update_models = {}
      for rid, v in c.get('Synapse').items():
          update_models[v['uname']] = {
              'params': {'name': 'SigmoidSynapse',
                         'reverse': -80.0 if v['uname'].split('--')[0][0] == 'R' else 0,
                         'threshold': -50.5,
                         'slope': 0.05,
                         'gmax': 0.04,
                         'scale': c.graph.nodes[rid]['N']},
              'states': {'g': 0.0}}
      c.update_models(update_models)

sending circuit configuration to GFX

Finally, all the neurons and synapses have been configured. We write the executable circuit to the database with name cartridge and version 1.0.

[19]: c.flush_model()

4.4 Execution of the model circuit with the neurokernel execution engine

With the modeling data stored in the database, we can issue commands to execute the circuit in the Neurokernel Execution Engine. First we remove components that have been disabled.

[20]: res = c.remove_components()

We define the duration and time step of the simulation:

[21]: dur = 2.0
    dt = 1e-4
    steps = int(dur/dt)

We then define inputs to the circuit. Here we present to the six photoreceptors a step input from 0.5 s to 1.5 s at a light intensity equivalent to 10,000 photons per second. We also specify that the inputs be returned to the frontend.

[22]: input_processors = {'LAM(L)': [
          {'class': 'StepInputProcessor',
           'name': 'LAM(L)',
           'module': 'neurokernel.LPU.InputProcessors.StepInputProcessor',
           'variable': 'photon',
           'uids': [c.find_model(c.uname_to_rid['R{}'.format(i)]).popitem()[0]
                    for i in range(1, 7)],
           'val': 1e4,
           'start': 0.5,
           'stop': 1.5,
           'input_file': 'LAM_input.h5',
           'input_interval': 10}]}

Next, we choose to record responses of the circuit with the ‘Record’ class and specify the variables and components to record. Here we request to return the membrane voltage ‘V’ of all neurons as specified by None in the ‘uids’ field.

[23]: output_processors = {'LAM(L)':
        [{'class': 'Record',
        'uid_dict': {'V': {'uids': None}},
        'sample_interval': 10}
        ]}

Execute the circuit. The call returns immediately and the execution request is queued, allowing you to further explore the circuit or to work on other circuits while waiting for the execution result.

[24]: c.execute(input_processors = input_processors, 
        output_processors = output_processors, 
        steps = steps, dt = dt)

[FBL NK 2021-01-18 11:04:35,464] Execution request sent. Please wait.

[FBL NK 2021-01-18 11:04:35,464] Job received. Currently queued #1

[FBL GFX 2021-01-18 11:04:51,793] Receiving Execution Result for cartridge/1.0. Please wait ...

[FBL GFX 2021-01-18 11:04:51,883] Received Execution Result for cartridge/1.0.

Result stored in Client.exec_result['cartridge/1.0']

A message like the above will be displayed to notify you that the result for the circuit cartridge/1.0 has been returned. Using the following method, the result is reorganized so that it can be referenced by the names of the neurons/synapses.

[25]: result = c.get_result('cartridge/1.0')

Plot the inputs to all photoreceptors and the response of R1 and L1 neurons.

[26]: client.plotExecResult('cartridge/1.0', outputs = ['R1', 'L1'])
Appendix 4—figure 5
Inputs to the photoreceptors used during the execution of a full lamina cartridge circuit.
Appendix 4—figure 6
The output voltage of the R1 photoreceptor and L1 neuron of the lamina cartridge circuit.

4.5 Retrieving the executable circuit from the NeuroArch database

Now let’s clear the workspace and issue the same NLP query as before.

[27]: res1 = client.executeNLPquery('show all')

[FBL NLP 2021-01-18 11:05:28,105] NLP successfully parsed query.

This time we can retrieve the executable circuit that we just wrote from the database, using the same methods as before.

[28]: c = circuit.ExecutableCircuit(client, res1)

Please select from the existing models to initialize the executable circuit, or press a to abort

0: cartridge version 1.0 (rid #457:0)  0

#457:0

Sending circuit configuration to GFX

We were asked to choose from a list of executable circuits that model the circuit displayed in the NeuroNLP window. Here only one such model exists: the one we created in Appendix 4.3.

This time we disable the R2-R6, a1-a6, L3 and T1 neurons on the circuit diagram. Equivalently, we can issue the following command:

[29]: c.disable_neurons(['R{}'.format(i) for i in range(2, 7)] + \
        ['a{}'.format(i) for i in range(1, 7)] + \
        ['L3', 'T1'])

sending circuit configuration to GFX

Appendix 4—figure 7
A lamina cartridge with several ablated neurons.
Appendix 4—figure 8
A reconfigured lamina cartridge obtained by disabling a number of neurons in the interactive circuit diagram.

We then reflect this change in the database before executing the circuit.

[30]: res = c.remove_components()
[31]: dur = 2.0
    dt = 1e-4
    steps = int(dur/dt) 

    input_processors = {'LAM(L)':
        [{'class': 'StepInputProcessor',
        'name': 'LAM(L)',
        'module': 'neurokernel.LPU.InputProcessors.StepInputProcessor',
        'variable': 'photon',
        'uids': [c.find_model(c.uname_to_rid['R1']).popitem()[0]],
        'val': 1e4,
        'start': 0.5,
        'stop': 1.5,
        'input_file': 'LAM_input.h5',
        'input_interval': 10}
        ]} 
    output_processors = {'LAM(L)':
        [{'class': 'Record',
        'uid_dict': {'V': {'uids': None}},
        'sample_interval': 10}
        ]}
[32]: c.execute(input_processors = input_processors, 
    output_processors = output_processors, 
    steps = steps, dt = dt)

[FBL NK 2021-01-18 11:18:47,961] Execution request sent. Please wait.

[FBL NK 2021-01-18 11:18:47,962] Job received. Currently queued #1

[FBL GFX 2021-01-18 11:19:05,231] Receiving Execution Result for cartridge/1.0. Please wait ...

[FBL GFX 2021-01-18 11:19:05,280] Received Execution Result for cartridge/1.0.

Result stored in Client.exec_result['cartridge/1.0']

[33]: result = c.get_result('cartridge/1.0')

Finally, we plot the input to the R1 neuron and the responses of the R1 and L1 neurons. Here, the L1 neuron receives input from only one photoreceptor, compared to six in Appendix 4.4.

[34]: client.plotExecResult('cartridge/1.0', outputs = ['R1', 'L1'])
Appendix 4—figure 9
Input to the photoreceptor in the reconfigured lamina cartridge circuit.
Appendix 4—figure 10
Voltage responses of the R1 photoreceptor and L1 neuron of the reconfigured lamina cartridge circuit.

Data availability

General information about FlyBrainLab is available at https://www.fruitflybrain.org. Stable and tested FlyBrainLab installation instructions are available at https://github.com/FlyBrainLab/FlyBrainLab. An overview of the FlyBrainLab resources can be found at the FlyBrainLab Resources wiki page at https://github.com/FlyBrainLab/FlyBrainLab/wiki/FlyBrainLab-Resources; it includes links to individual code repositories for components, libraries and tutorials. The NeuroArch Database hosting the publicly available FlyCircuit, Hemibrain, Medulla 7-column and Larva L1EM datasets can be downloaded from https://github.com/FlyBrainLab/datasets. The same repository provides Jupyter notebooks for loading publicly available datasets, such as the FlyCircuit dataset with inferred connectivity, the Hemibrain dataset, the Medulla 7-column dataset and the Larva L1EM dataset.


References

  1. Blondel VD, Guillaume J-L, Lambiotte R, Lefebvre E (2008) Fast unfolding of communities in large networks. Journal of Statistical Mechanics: Theory and Experiment 2008:P10008.
  2. Ellson J, Gansner E, Koutsofios L, North SC, Woodhull G (2001) Graphviz—open source graph drawing tools. International Symposium on Graph Drawing, pp. 483–484, Springer.
  3. Hausen K (1984) The lobula-complex of the fly: structure, function and significance in visual behaviour. In: Ali MA, editor. Photoreception and Vision in Invertebrates, NATO ASI Series (Series A: Life Sciences). Springer. pp. 523–559. https://doi.org/10.1007/978-1-4613-2743-1_15
  4. Lazar AA, Psychas K, Ukani NH, Zhou Y (2015b) Retina of the fruit fly eyes: a detailed simulation model. BMC Neuroscience 16(Suppl 1):P301. 24th Annual Computational Neuroscience Meeting, July 18–23, 2015, Prague, Czech Republic. https://doi.org/10.1186/1471-2202-16-S1-P301
  5. Lazar AA, Liu T, Yeh C-H (2020a) An odorant encoding machine for sampling, reconstruction and robust representation of odorant identity. IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP 2020), pp. 1743–1747. https://doi.org/10.1109/ICASSP40776.2020.9054588
  6. Lazar AA, Yeh C-H (2019) Predictive coding in the Drosophila antennal lobe. BMC Neuroscience 20(Suppl 1):P346. 28th Annual Computational Neuroscience Meeting, July 13–17, 2019, Barcelona, Spain. https://doi.org/10.1186/s12868-019-0538-0
  7. Lazar AA, Yeh C-H (2020) A molecular odorant transduction model and the complexity of spatio-temporal encoding in the Drosophila antenna. PLOS Computational Biology 16:e1007751. Original version posted on bioRxiv (https://doi.org/10.1101/237669), December 2017. https://doi.org/10.1371/journal.pcbi.1007751
  8. Skaggs WE, Knierim JJ, Kudrimoti HS, McNaughton BL (1995) A model of the neural basis of the rat’s sense of direction. Advances in Neural Information Processing Systems 7:173–180.
  9. Sokal RR (1958) A statistical method for evaluating systematic relationships. Univ. Kansas Sci. Bull. 38:1409–1438.

Decision letter

  1. Upinder Singh Bhalla
    Reviewing Editor; Tata Institute of Fundamental Research, India
  2. Ronald L Calabrese
    Senior Editor; Emory University, United States
  3. Padraig Gleeson
    Reviewer; University College London, United Kingdom

In the interests of transparency, eLife publishes the most substantive revision requests and the accompanying author responses.

Acceptance summary:

FlyBrainLab is a resource for driving connectomic analyses of the Drosophila brain and for carrying out computational modeling based on multiple data sources. It supports 3D visualization of datasets published in the worldwide literature, and a number of libraries for integrating anatomical, sensory and physiological data with published and exploratory computational models. It will be useful for a wide range of activities, from exploring the content and intersection of datasets, to comparing circuit models in the same computational setting, to running massively parallel circuit simulations.

Decision letter after peer review:

Thank you for submitting your article "FlyBrainLab: Accelerating the Discovery of the Functional Logic of the Drosophila Brain in the Connectomic/Synaptomic Era" for consideration by eLife. Your article has been reviewed by three peer reviewers, and the evaluation has been overseen by a Reviewing Editor and Ronald Calabrese as the Senior Editor. The following individual involved in review of your submission has agreed to reveal their identity: Padraig Gleeson (Reviewer #1).

The reviewers have discussed the reviews with one another and the Reviewing Editor has drafted this decision to help you prepare a revised submission.

As the editors have judged that your manuscript is of interest, but as described below that additional experiments are required before it is published, we would like to draw your attention to changes in our revision policy that we have made in response to COVID-19 (https://elifesciences.org/articles/57162). First, because many researchers have temporarily lost access to the labs, we will give authors as much time as they need to submit revised manuscripts. We are also offering, if you choose, to post the manuscript to bioRxiv (if it is not already there) along with this decision letter and a formal designation that the manuscript is "in revision at eLife". Please let us know if you would like to pursue this option. (If your work is more suitable for medRxiv, you will need to post the preprint yourself, as the mechanisms for us to do so are still in development.)

Summary:

This manuscript outlines the FlyBrainLab platform, which brings together a number of software packages from the authors to provide a unified interface for viewing data and simulating neuronal activity related to Drosophila. The reviewers felt that the paper had promise but there was substantial work still to be done.

Essential revisions:

1) Could the authors provide substantially more detail on how an experimentalist would use the package? It should be clear why they would want to do so.

2) The manuscript must provide transparency on the data processing and integration.

3) The package would be far more user-friendly if it had much simpler installation. Detailed instructions would help too.

4) Users would benefit from a process to keep the packages up-to-date, such as for the “hemibrain” module.

In addition, the reviewers have provided many helpful comments to help the authors with their revision.

Reviewer #1:

This manuscript outlines the FlyBrainLab platform, which brings together a number of software packages from the authors to provide a unified interface for viewing data and simulating neuronal activity related to Drosophila.

The application is well described and examples of its use are given. The code for the application components is open source, and installation instructions and documentation are provided. The suite of components clearly work well together, providing a very good example of a user-focussed computational neuroscience application for working with advanced data and models.

1) While much of the technical/implementation detail is reserved for the Materials and methods section, the main body of the manuscript would benefit from a high level diagram of the structure of the application (like Supplementary figure 1, or even simpler), or a table defining/summarising the various components mentioned in the main text (NeuroMinerva/CxCircuit/NeuroArch etc.) and how they relate to each other.

Reviewer #2:

FlyBrainLab by Lazar et al. provides the ability to set up, execute, and analyze Drosophila neural circuits, while integrating/exploring connectomics data, in a single platform. Such a unified framework has the potential to advance our understanding of the functional logic of the fly brain. The authors show that FlyBrainLab tools can be used to execute models developed previously in the literature. What is missing, however, is a clear demonstration that the platform can be used for de novo exploration and guidance on how the tools offered by the platform will enable new discoveries. In particular, the case is not made that using this library provides an easier path to discovery than the normal ad-hoc approach. The work does not fully describe what a user needs to do to deploy it for their own studies, nor does it clearly show how its own examples were generated.

1) Across the circuit examples supplied in the manuscript, it is not clear what features need to be manually coded up for the particular circuit/question of interest vs. what features can be pulled from FlyBrainLab and directly used. At present, the discussion of the different libraries in the supplement lists capabilities, but there is no guidance or examples of how the libraries can be used in practice. We could not find documentation for CXcircuits, EOScircuits, and MolTrans online. Similarly, the supplementary video illustrates the interactive capabilities of the platform, but the manuscript does not guide the user in replicating these capabilities on their own. To fix this, we advise the authors to include the notebooks used to generate all of the figures/analysis in the main results as supplementary files, with detailed annotation so that a user can use them as starting points for their own analyses.

2) More must be included in the manuscript to describe how the tool can be used for exploratory analysis. Consider including a simple annotated code walkthrough that, starting with some list of neurons, perhaps from the Hemibrain, answers what utilities are available/what code is needed to visualize neuron morphologies, what code is needed to generate an interactive circuit diagram, what code is needed to set up a simple leaky integrate and fire model, what is needed to execute a circuit, and whether resultant firing rate outputs look reasonable. The panels in Supplementary figure 3 are close, but they show the results of the above workflow, and there is no demonstration on how one can get there. Such an example need not (and perhaps is better not to) focus on a well-characterized circuit. The simple examples found in FlyBrainLab/Neuroballad are promising.

3) More work can be done to lower the barrier of entry for FlyBrainLab. Even as a researcher with a few years of Python experience that is currently using the Hemibrain to set up, run, and analyze neural circuits, I had difficulty installing FlyBrainLab and knowing what steps to take to replicate the examples shown in the manuscript. In particular, the installation instructions seem inconsistent/not fully developed on https://github.com/FlyBrainLab/FlyBrainLab. It took hours to figure out which instructions to follow to end up with a Jupyter Lab configuration that resembles the supplementary video, with a notebook, a morphology viewer, and a circuit viewer in the same window. The installation instructions within NeuroMinerva, built on JupyterLab version >2, helped get me to that point, but the instructions on FlyBrainLab, built on JupyterLab version <2, did not get me to that point. In addition, the "Starting Up FlyBrainLab" section on https://github.com/FlyBrainLab/FlyBrainLab should have material on what to do if you do not see an FFBO section or cannot run the example notebook, perhaps in some troubleshooting page.

Reviewer #3:

Lazar and colleagues present a platform, FlyBrainLab that integrates Drosophila neuron and circuit modelling data with neuroanatomy, from morphology to synaptic resolution information. Their desktop system is modular and stand-alone, providing the ability to query, run and visualise particular circuits and models. To demonstrate the functionality of their platform they present 3 specific examples that cover the use of published models, light and electron-microscopy (EM) data and the comparison between larva and adult.

Although the need their platform addresses is real, the manuscript does not present the work in a compelling way, particularly for this journal's audience. Furthermore, the methods used to integrate data, and how data are used, are not described properly. If a system such as this aims to become a standard analytical tool for neuroscientists, it is essential that data integration and processing are transparent.

Please find below a number of concerns. I do not comment on the technical details of the FlyBrainLab platform modules, as that is not my expertise.

1) The structure of the manuscript and the way the examples are presented are not compelling for the average neuroscientist that wants to start using the public data (models, connectome and synaptome). Especially if the one of the main draws of this type of platform is for neuroscientists to start testing models based on real data. The main reason for this is that very little information is given on how experimental data is curated and integrated (see below for more).

2) What neuroanatomical data is being used and in what way is completely opaque. It is assumed that different modalities of data will have been processed in different ways, but very little information is given in this regard. How is the light-level FlyCircuit data processed to infer connectivity and how is this process validated? How are cell types identified and validated, in FlyCircuit and the hemibrain? How many neurons and types are used for each use case?

For example, regarding the CX circuit example, the authors say, "The innervation pattern of each neuron was visually examined in the NeuroNLP window and a standard name assigned according to the naming scheme adopted in the CXcircuit Library." How do these standard names relate to the cell type names used by the community? Identifying cell types from morphological data requires expertise when this is done to the highest resolution, and thus this process should be described in detail. In addition, it becomes very difficult to assess the use cases presented when there is no clarity on what neurons and types are being used.

3) Related to the point above, the authors state that the hemibrain data used are from version 1.0.1 (gs://hemibrain-release/neuprint/hemibrain_v1.0.1_neo4j_inputs.zip). However, a new version of the data (1.1) was released online in May, with the data dumps available at least from the end of June (according to https://dvid.io/blog/release-v1.1/). The latest version significantly improves the cell typing that had been released (see https://docs.google.com/document/d/1vae3ClHR8z8uekqwrOHtqiux3oY5-Y_xw6W2srCi3PI/edit?usp=sharing). The authors should update their manuscript to use the latest version of the data. This should also highlight issues of how data can be kept up to date in these types of platforms and how integration of versions can be achieved. The authors should comment on the processes they use for this.

4) Presenting this platform as a Resource, it becomes essential that it is easy to install. I attempted to install FlyBrainLab according to the instructions in https://github.com/FlyBrainLab/FlyBrainLab. Using miniconda on macOS, which I already had installed for other purposes, I unfortunately ran into errors, and the installation was unsuccessful (seemingly caused by msgpack not being found). The instructions mention that the platform has only been tested on Ubuntu but that it "should work" on other platforms. I understand that it is not possible to test for and avoid all possible errors, but the authors should test the installation on at least one other OS if they want the average neuroscientist to start using it.

The tutorials listed in https://github.com/FlyBrainLab/Tutorials are certainly a very useful introduction, although they suffer from the issues in points 2 and 3.

[Editors' note: further revisions were suggested prior to acceptance, as described below.]

Thank you for submitting your article "Accelerating with FlyBrainLab the Discovery of the Functional Logic of the Drosophila Brain in the Connectomic Era" for consideration by eLife. Your article has been overseen by a Reviewing Editor and Ronald Calabrese as the Senior Editor.

The Reviewing Editor has drafted this decision to help you prepare a revised submission.

As the editors have judged that your manuscript is of interest, but as described below that additional work is required before it is published, we would like to draw your attention to changes in our revision policy that we have made in response to COVID-19 (https://elifesciences.org/articles/57162). First, because many researchers have temporarily lost access to the labs, we will give authors as much time as they need to submit revised manuscripts. We are also offering, if you choose, to post the manuscript to bioRxiv (if it is not already there) along with this decision letter and a formal designation that the manuscript is "in revision at eLife". Please let us know if you would like to pursue this option. (If your work is more suitable for medRxiv, you will need to post the preprint yourself, as the mechanisms for us to do so are still in development.)

Summary:

The revised manuscript has addressed some of the technical issues, but has not addressed the core issues of readability of the manuscript, and usability of the software, by a regular fly neurobiologist. This was stated in the Essential revisions, point 1: "1. Could the authors provide substantially more detail on how an experimentalist would use the package? It should be clear why they would want to do so."

While the authors have responded with some limited explanations in the cover letter, the required changes are not evident in the manuscript, and it is there that these essential points of usability must be clarified. Again, it is not sufficient to refer the reader to the website to do this. The appendices, and much of the text, still mostly tell the reader what can be done, rather than how to do it. This should be rather early in the manuscript to motivate what follows.

Similarly, on essential point 2, the reviewers would like to know how their data goes in and is manipulated, and how they can be confident that what the program does is faithful to the original. Issues of installation, which have been presented, are relevant, but of secondary technical importance.

[Editors' note: further revisions were suggested prior to acceptance, as described below.]

Thank you for submitting your article "Accelerating with FlyBrainLab the Discovery of the Functional Logic of the Drosophila Brain in the Connectomic Era" for consideration by eLife. Your article has been reviewed by three peer reviewers, and the evaluation has been overseen by a Reviewing Editor and Ronald Calabrese as the Senior Editor. The following individuals involved in review of your submission have agreed to reveal their identity: Padraig Gleeson (Reviewer #1); Danylo Lavrentovich (Reviewer #2).

The reviewers have discussed their reviews with one another, and the Reviewing Editor has drafted this decision to help you prepare a revised submission.

Essential Revisions:

The reviewers and I felt that the paper and FlyBrainLab provide a useful resource for the field. The revised version is considerably improved, and the reviewers would like to suggest a few essential but straightforward revisions to make it even more accessible to the readers and users of this resource.

1) Update key references (indicated in the detailed reviews).

2) Clarify Figures and their legends, especially Figure 2 and 3.

There are several further important suggestions by the reviewers to strengthen the presentation and improve the accessibility of the paper and resource for readers. These are provided in the detailed reviewer comments below.

Reviewer #1:

This new version of the manuscript has a better layout and will be a more useful introduction to the application for new users. However, there are still some issues with how the structure of the application is presented, which may be difficult for readers.

Figure 2, especially the legend, is quite minimal, and there is nothing here to give a reader the key idea that this is a graphical application which a user would interact with through their browser. I suggest moving the screenshot of the application from Appendix 1—figure 2 to a panel in the main Figure 2, and making sure these panels are well integrated and explained, e.g. NeuroMynerva in the top panel is what you see in the bottom. Refer the reader here to Appendix 1—figure 1 for more details (some of the colors of the blocks match between the simplified/full versions, e.g. green NeuroArch; there's no reason they all shouldn't, for ease of readability).

NeuroNLP and NeuroGFX (window) are mentioned in the text without any context. These need to be shown/explained in Figure 2 and also described briefly where NeuroMynerva etc. are first defined in the Introduction. I would suggest highlighting all important component names in bold where they are first introduced so a user can go back to the definition as they are discussed later in the text.

It is strange that the actual short English language queries used for Figure 3 are not mentioned in the legend or main text. This is an important feature of the application and adding (at least some of) the sequence of commands for one of the panels (e.g. 3a, "show T4a", "color red", "add cholinergic presynaptic neurons" etc.) in another panel/table in the figure would be quite informative for readers. In the main text "(see also Materials and methods)" could be replaced with something better like: (the full sequence of queries which created this panel can be found in the Materials and methods). Also explain that the panels in Figure 3 are screenshots of the NeuroNLP window in Figure 2B, etc.

It is good having a section in the Materials and methods for each of other figures related to the main use cases/examples, but these could be tied together better also, making it clearer that the details of how the figure panels were generated can be found in the Materials and methods. Also some parts of the Materials and methods do not refer back to the figures, e.g. "Model A [26], Model B [27] and Model C [28]" could refer to Figure 6A, B, C etc. Small things like this would improve the readability of the paper significantly.

It might also be worth numbering the use cases/analysis types, e.g. Use Case 1-6, and adding these to subheadings to make it easier to move between the main text and Materials and methods.

Overall the manuscript is a good introduction to the range of features FlyBrainLab offers and is structured such that a user can see what can be accomplished, and is given some guidance how they would achieve it themselves.

Reviewer #2:

The text is clearer and more inviting for a general audience. The enumeration of capabilities in the Introduction is effective. The Results section is structured well, displaying different use cases of FlyBrainLab. The accompanying tutorials online serve as good launching points for researchers.

Thank you to the authors for the additions in the main text, the code walkthroughs in the appendices, and the improved installation instructions. The basic tutorials are simple to follow. My only suggestion on the code side is to be more verbose in the introduction to the lamina cartridge executable circuit notebook and in the limitations of the user-side-only installation.

Reviewer #3:

The revised version of the manuscript addresses many of the concerns previously reported. Thank you to the authors for providing much clearer information about the data that is ready to use in the FlyBrainLab platform, how it can be used and installed, and the components of FlyBrainLab. There are still some corrections needed regarding the source of some of the datasets.

Throughout the paper, reference 4 (Xu et al., 2020) is used as the citation for the hemibrain dataset. This is a preprint that has been superseded by the publication in September 2020 of the peer-reviewed paper (Scheffer et al., 2020, https://doi.org/10.7554/eLife.57443). It also needs updating in GitHub (https://github.com/FlyBrainLab/Datasets#ref-1)

The reference to the larval L1EM dataset also needs correcting. For example this is given as reference 2 (Berck et al., 2016). The correct reference, as correctly shown in https://github.com/FlyBrainLab/Datasets#ref-3, is Ohyama et al., 2015 (reference 69). There might be other instances in the text that use the wrong citation.

The section added to the beginning of the Results, which includes Figure 3, provides readers with some examples on how they can start exploring the data in the platform (published datasets) using plain English queries. However, I do not think the added Figure 3 currently presents the data in a way that makes it easy for readers to link the relevant text and figure legend that describe the connectivity, to the panels. Each of the 4 examples (a-d) displays a neuron plot (left) and a connectivity matrix (right); other than reading each of the row/column names it is not possible to link the neurons plotted on the left to the data plotted on the right. Adding a colored annotation bar or even coloring the row/column names of the connectivity matrices according to the neuron plots would certainly help, or perhaps adding some clustering.

Example 2 refers to a possible direct connection between the mushroom body and the fan-shaped body ("raising the question whether the two memory centers are directly connected"). Some of the neurons directly connecting these 2 neuropils (and possible pathways for visual information in addition to reference 17), have been described already, in Li et al., 2020 (December 2020, https://doi.org/10.7554/eLife.62576), one of the recent papers based on the hemibrain dataset. Could the authors please rephrase?

https://doi.org/10.7554/eLife.62362.sa1

Author response

Essential revisions:

1) Could the authors provide substantially more detail on how an experimentalist would use the package? It should be clear why they would want to do so.

The FlyBrainLab platform has a number of capabilities that can be used by experimentalists with widely different backgrounds in computing.

a) For experimentalists with limited programming experience, the platform can be used for extensive visualization, interactive search and the building of simple brain circuits of interest. Here, the FlyBrainLab interactive capabilities require only basic knowledge of terminology in neurobiology. The user interacts with the UI through natural language queries, without the need to go through button-clicking or to learn a new, sophisticated database query language. For example, the NeuroNLP Window (see Appendix 1—figure 2) supports the construction of novel brain circuits (not just the display of individual neurons/cell types) on the morphological level of abstraction. Simple utilities enable the graphical display of connectivity diagrams (graphs). The neurons displayed can subsequently be targeted for genetic manipulation, optogenetic ablation and/or recording. As these capabilities are not the main focus of our manuscript, we created a notebook to guide the user through the use of English queries and to illustrate their effectiveness.

b) Experimentalists with some computing background, say Python programming, have the capability to explore and analyze novel circuits as demonstrated in Figure 6. The level of computing knowledge required is akin to Matlab programming. Here, the exploration and analysis of as yet unknown brain circuits may be of interest. For example, if experimentalists have detailed morphology and/or neural activity data for a cell of interest, they can use FlyBrainLab to build a circuit containing its pre- and postsynaptic neurons and computationally analyze the circuit connectivity and the effect of neuron ablation on its function. The same effect can often be evaluated by experimental means, say by optogenetic ablation of neurons suggested by the computational model. The same methodology can be followed when choosing a circuit initially investigated in the literature. FlyBrainLab immediately extends the capabilities to visualize and evaluate the functionality of a chosen circuit in a larger context than previously published. In addition, users can benefit from the Circuit Libraries written by more computationally advanced users. Since FlyBrainLab bridges the gap between fly brain data and executable circuits, interactive computational models are much easier for an experimentalist to explore than an ad hoc program. This is clearly reflected in the CX example depicted in Figure 3, where multiple windows synchronize morphology visualization data with circuit diagrams. Furthermore, these libraries provide an easier path for experimentalists to computationally explore the circuits under study and to validate or invalidate models using collected data. For example, users can intuitively silence neurons using an interactive circuit diagram (rather than digging into someone’s code) and compare the response to that of a recorded circuit where the same neuron is silenced genetically. Ultimately, it is the exact mechanism underlying the function of brain circuits that is largely lacking.

c) Experimentalists with a more advanced computational neuroscience background can take full advantage of the capabilities to generate circuit diagrams and run parallel programs with Neurokernel on GPUs. In this scenario, the experimentalist goes well beyond what is possible today on the bench. The key reason is scaling. An arbitrary number of interconnected neuropils of interest, implemented by the same or different research groups, can be interconnected and structurally and functionally explored, as these circuit models are also represented in the NeuroArch Database and can be easily retrieved for execution. Scaling circuits, say by considering larger and larger brain regions, is clearly of paramount importance in the quest to understand the logic of brain function. As in classical Computer Science, the question of scaling leads to deep questions of complexity. Here, FlyBrainLab offers a platform for accelerating the discovery of the functional logic of the Drosophila brain, something ad hoc methods cannot deliver. By analogy, we can of course build cars in a garage, but in order to accelerate the building of cars, car makers moved to the assembly line long ago.

To lower the bar of entry for systems and computational neuroscientists, we now provide a number of tutorials online at https://github.com/FlyBrainLab/Tutorials organized along the technical proficiency required. We are aware, however, that we do not/cannot accommodate the needs of all the users in the neuroscience research community interested in using the FlyBrainLab computing platform.

2) The manuscript must provide transparency on the data processing and integration.

We would like to stress that FlyBrainLab is not positioned as a data provisioning platform, but rather a computing platform that provides utilities to visualize fly brain data and explore, analyze, evaluate and compare executable circuit models in a common environment.

Evaluating the transparency of the data integration and circuit execution capabilities presented here clearly requires that users have FlyBrainLab fully operational. We strongly recommend that if, for whatever reason, the reviewers run into installation problems, they contact us through the editors. In this context, we would like to emphasize that the complexity of the FlyBrainLab platform is well beyond what has typically been attempted in the computational neuroscience literature. The complexity involved is reminiscent of that of an operating system, where processes running on a CPU (Neurokernel) interact with a database (NeuroArch) and are flexibly invoked by the UI through NeuroMynerva.

To further enhance the transparency of the computing platform, we included two new figures in the manuscript:

a) Figure 2 in the Introduction section gives an overview of the main components and hints at the complexity of the overall FlyBrainLab architecture.

b) Figure 6 in the Results section demonstrates the capability to effectively explore the structure and function of yet to be discovered brain circuits.

Figure 2 and Appendix 1—figures 1 and 2 help clarify the complexity of the FlyBrainLab architecture. The design choices underlying the systems architecture of FlyBrainLab were previously detailed in the papers describing the NeuroArch (Givon et al., 2015: http://dx.doi.org/10.5281/zenodo.44225) and Neurokernel (Givon and Lazar, 2016: https://doi.org/10.1371/journal.pone.0146581) components. The NeuroMynerva user-side front end, built on top of JupyterLab, is new. We avoid listing more details here as they are highly technical; they are of course available on GitHub.

We have now included a number of tutorials to help the user take advantage of the Circuit and Utility Libraries. Detailed notebooks provide users with an overview and a number of examples of how to invoke the libraries. For example, we published the CXcircuits Library at https://github.com/FlyBrainLab/CXcircuits, and included notebooks for each of the three CX models analyzed in the main text. In addition, the notebooks and libraries for the other figures/results will be published once the paper is accepted. They will be listed in the “Publications and Talks” and “Libraries” sections of the FlyBrainLab Wiki page https://github.com/FlyBrainLab/FlyBrainLab/wiki/FlyBrainLab-Resources.

Although we do not consider the content of the datasets as being central to the capabilities of FlyBrainLab as a computing tool, we welcome the use of our platform with (not instead of) other data provision platforms and have provided examples of how FlyBrainLab can be used with new datasets (https://github.com/FlyBrainLab/Tutorials/blob/master/tutorials/swc_loading_tutorial/swc_loading.ipynb). However, in order to demonstrate the capabilities of FlyBrainLab to work with a wide range of datasets, we’ve included several publicly available datasets (listed and tracked in the FlyBrainLab Wiki) in the default FlyBrainLab package, which have been loaded into the NeuroArch database with minimal changes (apart from formatting edits for compatibility with the NeuroArch schema, which is specified in Givon et al., 2015: http://dx.doi.org/10.5281/zenodo.44225). Notebooks for loading these datasets into NeuroArch databases are provided in https://github.com/FlyBrainLab/datasets.
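The SWC loading referred to above relies on the standard SWC column layout (sample id, structure type, x, y, z, radius, parent id). As a FlyBrainLab-independent illustration only — the parser and the toy morphology below are hypothetical sketches, not code from the linked notebook — a morphology file can be read as:

```python
# Minimal sketch of parsing an SWC morphology. Each non-comment line holds:
# sample_id, structure_type, x, y, z, radius, parent_id (-1 for the root).

def parse_swc(text):
    """Return a dict mapping sample id -> node record."""
    nodes = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith('#'):
            continue  # skip blanks and comment lines
        sid, stype, x, y, z, r, parent = line.split()
        nodes[int(sid)] = {
            'type': int(stype),
            'xyz': (float(x), float(y), float(z)),
            'radius': float(r),
            'parent': int(parent),
        }
    return nodes

# A two-point toy neuron: a soma (type 1) and one neurite sample (type 3).
demo = """
# toy morphology
1 1 0.0 0.0 0.0 5.0 -1
2 3 10.0 0.0 0.0 1.0 1
"""
morphology = parse_swc(demo)
root = [n for n in morphology.values() if n['parent'] == -1]
```

Records in this form can then be reshaped to whatever schema a database such as NeuroArch expects.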

3) The package would be far more user-friendly if it had much simpler installation. Detailed instructions would help too.

We have substantially improved the installation process and taken it to the next level.

a) We corrected and significantly expanded the installation instructions at https://github.com/FlyBrainLab/FlyBrainLab, and added a Wiki page for troubleshooting (https://github.com/FlyBrainLab/FlyBrainLab/wiki/Troubleshooting).

b) We now provide multiple installation options for users of different expertise: 1) using a script, 2) using our Docker image, and 3) using an Amazon AWS machine image. All these options typically require only a single command line for installation and a single command line for starting the application. We also clearly listed the system requirements for each installation option. Dependencies are fully described in the installation scripts.

c) We tested the installation procedure on Linux (Ubuntu and CentOS), macOS and Windows.

d) We asked a diverse range of users, including colleagues, collaborators, and undergraduate students, to test the installation and received positive feedback.

e) We created a number of “get started” tutorials to guide users through the basic features of the system (https://github.com/FlyBrainLab/Tutorials/tree/master/tutorials/getting_started).

f) We now provide users with more information about available resources (https://github.com/FlyBrainLab/FlyBrainLab/wiki/FlyBrainLab-Resources), including i) a list of the latest versions of the components, and ii) a list of publicly available datasets that we loaded into the NeuroArch database.

4) Users would benefit from a process to keep the packages up-to-date, such as for the “hemibrain” module.

For installation purposes, we have published and will keep updating a list of the latest versions of all FlyBrainLab components at https://github.com/FlyBrainLab/FlyBrainLab/wiki/FlyBrainLab-Resources#repositories. The installation scripts provide the version numbers of the dependencies whenever applicable.

For datasets loaded into the NeuroArch Database, we

a) Provided a Datasets Version Tracker https://github.com/FlyBrainLab/datasets, that also includes the latest Hemibrain dataset,

b) Published the code/notebooks used to load the NeuroArch Database with different datasets, thereby helping users to load the NeuroArch Database independently of the main developers,

c) Will update the database periodically and in a timely manner, and welcome community contributions.

In addition, the reviewers have provided many helpful comments to help the authors with their revision.

Reviewer #1:

This manuscript outlines the FlyBrainLab platform, which brings together a number of software packages from the authors to provide a unified interface for viewing data and simulating neuronal activity related to Drosophila.

The application is well described and examples of its use are given. The code for the application components is open source, and installation instructions and documentation are provided. The suite of components clearly works well together, providing a very good example of a user-focussed computational neuroscience application for working with advanced data and models.

1) While much of the technical/implementation detail is reserved for the Materials and methods section, the main body of the manuscript would benefit from a high-level diagram of the structure of the application (like Supplementary figure 1, or even simpler), or a table defining/summarising the various components mentioned in the main text (NeuroMynerva/CXcircuits/NeuroArch, etc.) and how they relate to each other.

Thank you for the suggestion! We added a new figure (Figure 2) to the manuscript to provide an early overview of the main FlyBrainLab components, i.e., NeuroArch, Neurokernel and NeuroMynerva. As suggested by the reviewer, Figure 2 is a simpler version of Appendix 1—figure 1. It provides an overview of the main datasets currently available in NeuroArch, and the circuit execution capabilities supported by Neurokernel. To put the FlyBrainLab components in a better context, we also added text relating the levels of abstraction underlying Figure 2 and Appendix 1—figure 1.

Reviewer #2:

FlyBrainLab by Lazar et al. provides the ability to set up, execute, and analyze Drosophila neural circuits, while integrating/exploring connectomics data, in a single platform. Such a unified framework has the potential to advance our understanding of the functional logic of the fly brain. The authors show that FlyBrainLab tools can be used to execute models developed previously in the literature. What is missing, however, is a clear demonstration that the platform can be used for de novo exploration and guidance on how the tools offered by the platform will enable new discoveries. In particular, the case is not made that using this library provides an easier path to discovery than the normal ad-hoc approach. The work does not fully describe what a user needs to do to deploy it for their own studies, nor does it clearly show how its own examples were generated.

We thank the reviewer for the constructive comments on the manuscript. At the end of the Results section, we now present several examples of, and approaches for, exploring an unknown circuit by expanding upon Supplementary figure 3 of the original submission.

What FlyBrainLab offers may not be an easier path to discovery, but it is certainly a faster one. What does a car assembly line provide? Does one still need to design a car? Yes, and it’s a difficult problem. But the assembly line makes production much faster. Similarly, FlyBrainLab provides an essential workflow for building circuits from data, visualizing circuits, comparing existing models within the same platform, interactively manipulating circuits, and creating new models, all within a single environment that provides the much-needed integration to accelerate discoveries. Furthermore, we have substantially expanded the tutorials at https://github.com/FlyBrainLab/Tutorials, specifically https://github.com/FlyBrainLab/Tutorials/tree/master/tutorials/getting_started, to get users started with all aspects of FlyBrainLab. Notebooks for the CX example have been published, and the notebooks for the other results in the manuscript will be published once the paper is accepted for publication.

The reviewer seems to suggest that comparison between models is not a novel exploration of circuit function. We argue the opposite. Comparison of models in the literature is one of the keys to an in-depth understanding of the exact assumptions, structure and function of the published models, operating under the same setting/environment. This is how fields like machine learning, signal processing and computer vision thrive, where comparisons and being able to run every line of code are standard practice. However, this is not the norm in systems neuroscience and computational neuroscience, with very few exceptions (see https://doi.org/10.1523/JNEUROSCI.3374-12.2013 for an example). In the two examples involving early olfactory circuits, we do not just “execute models developed previously in the literature”. Rather, the models considered have been adapted to new contexts to explore the function of the circuits under consideration, either due to the more precise connectivity information brought by the Hemibrain dataset, or due to a downsizing of the larva circuit. We consider these, too, as de novo explorations of the functional logic of the underlying circuits.

1) Across the circuit examples supplied in the manuscript, it is not clear what features need to be manually coded up for the particular circuit/question of interest vs. what features can be pulled from FlyBrainLab and directly used. At present, the discussion of the different libraries in the supplement lists capabilities, but there is no guidance or examples of how the libraries can be used in practice. We could not find documentation for CXcircuits, EOScircuits, and MolTrans online. Similarly, the supplementary video illustrates the interactive capabilities of the platform, but the manuscript does not guide the user in replicating these capabilities on their own. To fix this, we advise the authors to include the notebooks used to generate all of the figures/analysis in the main results as supplementary files, with detailed annotation so that a user can use them as starting points for their own analyses.

We thank the reviewer for this suggestion. To address the concern raised, we published the CXcircuits Library at https://github.com/FlyBrainLab/CXcircuits, and included notebooks for each of the three CX models analyzed in the main text. The notebooks are mentioned in the revised manuscript. To avoid duplication, we chose to publish the code directly on GitHub instead of using supplementary manuscript files. In this way, the code can constantly be updated to include new features and users can benefit from the most up-to-date version. We also note that the exact commit provides access to the code for reproducing the figures/results of the manuscript.

In addition, the notebooks and libraries for the other figures/results will be published once the paper is accepted. They will be listed in the “Publications and Talks” and “Libraries” sections of the FlyBrainLab Wiki page https://github.com/FlyBrainLab/FlyBrainLab/wiki/FlyBrainLab-Resources.

We would like to take this opportunity to further clarify the current division of the roles taken by FlyBrainLab platform and its Libraries.

The main components of FlyBrainLab, as described in Appendix 1, provide the core functionalities required for data storage/retrieval, visualization, the user interface, and code execution of brain circuits. Examples of capabilities that users can directly invoke are given below:

1) Query using plain English and 3D graphics for building and visualizing brain circuits;

2) Retrieval of connectivity of the brain circuit built/visualized;

3) User interface and API for circuit diagram interaction;

4) Specification of models for each circuit component;

5) Execution of the circuits represented/stored in the NeuroArch Database.
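As an illustration of capability 2), the connectivity retrieved for a circuit is commonly manipulated as an adjacency matrix. The sketch below is independent of the FlyBrainLab API; the neuron names and the helper function are placeholders, not entries from any dataset:

```python
# Hypothetical sketch: turning a retrieved list of (presynaptic, postsynaptic,
# synapse_count) tuples into a connectivity matrix, using only plain Python.

def connectivity_matrix(edges):
    """Build (names, matrix) from (pre, post, count) tuples."""
    names = sorted({n for pre, post, _ in edges for n in (pre, post)})
    index = {name: i for i, name in enumerate(names)}
    matrix = [[0] * len(names) for _ in names]
    for pre, post, count in edges:
        matrix[index[pre]][index[post]] += count
    return names, matrix

# Placeholder connections between three hypothetical neurons.
edges = [('A', 'B', 12), ('A', 'C', 3), ('B', 'C', 7)]
names, M = connectivity_matrix(edges)
```

A matrix in this form is what gets rendered as the connectivity heatmaps shown in the manuscript's figures.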

The Utility Libraries provide tools for (1) analyzing data retrieved using the core FlyBrainLab functionality, and (2) creating circuit diagrams semi-automatically.

The Circuit Libraries are built on top of the core FlyBrainLab functionality and provide tools to study functions of a specific brain region/circuit. An analogy of the relation between Circuit Libraries and the FlyBrainLab platform is that between the toolboxes and core functions of Matlab. In Matlab, the former provide high-level, application-specific functionality realized with some of the core built-in functions (such as a digital signal processing toolbox). In other words, the Circuit Libraries are examples of how the features of FlyBrainLab can/should be used.

2) More must be included in the manuscript to describe how the tool can be used for exploratory analysis. Consider including a simple annotated code walkthrough that, starting with some list of neurons, perhaps from the Hemibrain, answers what utilities are available/what code is needed to visualize neuron morphologies, what code is needed to generate an interactive circuit diagram, what code is needed to set up a simple leaky integrate and fire model, what is needed to execute a circuit, and whether resultant firing rate outputs look reasonable. The panels in Supplementary figure 3 are close, but they show the results of the above workflow, and there is no demonstration on how one can get there. Such an example need not (and perhaps is better not to) focus on a well-characterized circuit. The simple examples found in FlyBrainLab/Neuroballad are promising.

Thank you for your suggestion. We substantially expanded upon the exploratory example in Supplementary figure 3 and moved it to the end of the Results section. It appears now as Figure 6 in the subsection entitled “Exploring the Structure and Function of Yet to be Discovered Brain Circuits”. Three examples are described that demonstrate the use of the Utility Libraries in the quest of novel discoveries.

Furthermore, we have substantially expanded upon the online documentation and tutorials. We have added notebooks showing examples in each of the main steps of the workflow, including:

1) How to ask questions in English to obtain a list of neurons of interest (including visualizing their morphology) and how to build upon an initial set of neurons,

2) How to generate circuit diagrams,

3) How to interact with the circuit diagram layout, build a model and execute it, obtain the result and plot the response.

We will continue to release more examples in the future.
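As a library-independent illustration of the model component the reviewer's example workflow ends with, a leaky integrate-and-fire neuron can be sketched in a few lines of NumPy. The function name, parameter values and usage below are our own hypothetical choices for exposition, not FlyBrainLab's (or Neuroballad's) model API:

```python
import numpy as np

def simulate_lif(current, dt=1e-4, tau=0.02, v_rest=-0.065,
                 v_reset=-0.065, v_thresh=-0.050, r_m=1e7):
    """Euler integration of a leaky integrate-and-fire neuron.

    current: input current per time step (A). Returns the membrane
    voltage trace (V) and the indices of the spiking time steps.
    """
    v = v_rest
    v_trace = np.empty(len(current))
    spikes = []
    for i, i_in in enumerate(current):
        v += dt * (-(v - v_rest) + r_m * i_in) / tau
        if v >= v_thresh:       # threshold crossing: emit spike, reset
            spikes.append(i)
            v = v_reset
        v_trace[i] = v
    return v_trace, spikes

# a constant 2 nA step drives periodic spiking with these parameters;
# halving it to 1 nA keeps the neuron subthreshold and silent
v, spikes = simulate_lif(np.full(5000, 2e-9))
```

In FlyBrainLab, a model of this kind would be attached to circuit components retrieved from the NeuroArch Database and executed by the Neurokernel Execution Engine rather than simulated in isolation as above.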

3) More work can be done to lower the barrier of entry for FlyBrainLab. Even as a researcher with a few years of Python experience that is currently using the Hemibrain to set up, run, and analyze neural circuits, I had difficulty installing FlyBrainLab and knowing what steps to take to replicate the examples shown in the manuscript. In particular, the installation instructions seem inconsistent/not fully developed on https://github.com/FlyBrainLab/FlyBrainLab. It took hours to figure out which instructions to follow to end up with a Jupyter Lab configuration that resembles the supplementary video, with a notebook, a morphology viewer, and a circuit viewer in the same window. The installation instructions within NeuroMinerva, built on JupyterLab version >2, helped get me to that point, but the instructions on FlyBrainLab, built on JupyterLab version <2, did not get me to that point. In addition, the "Starting Up FlyBrainLab" section on https://github.com/FlyBrainLab/FlyBrainLab should have material on what to do if you do not see an FFBO section or cannot run the example notebook, perhaps in some troubleshooting page.

We thank the reviewer for the valuable feedback and apologize for the confusion. We substantially improved upon the installation process:

1) We corrected and significantly expanded the installation instructions at https://github.com/FlyBrainLab/FlyBrainLab, and added a Wiki page for troubleshooting (https://github.com/FlyBrainLab/FlyBrainLab/wiki/Troubleshooting).

2) We tested the installation procedure on Linux (Ubuntu and CentOS), macOS and Windows.

3) We have already published and will keep updating a Docker image that has the full FlyBrainLab installed (https://hub.docker.com/r/fruitflybrain/fbl). We also included an Amazon Machine Image to be used on the AWS EC2 service (https://github.com/FlyBrainLab/FlyBrainLab#14-amazon-machine-image). Additional images for other cloud services will be provided upon request.

4) We asked a diverse range of users, including colleagues, collaborators, and undergraduate students, to test the installation and received positive feedback.

Reviewer #3:

Lazar and colleagues present a platform, FlyBrainLab that integrates Drosophila neuron and circuit modelling data with neuroanatomy, from morphology to synaptic resolution information. Their desktop system is modular and stand-alone, providing the ability to query, run and visualise particular circuits and models. To demonstrate the functionality of their platform they present 3 specific examples that cover the use of published models, light and electron-microscopy (EM) data and the comparison between larva and adult.

Although the need their platform is addressing is real, the manuscript does not present the work in a compelling way, particularly for this journal's audience. Furthermore, the methods used to integrate data, and how data are used are not described properly. If a system such as this aims to become a standard analytical tool for neuroscientists, it is essential that data integration and processing are transparent.

We thank the reviewer for providing this perspective. We would like to stress that FlyBrainLab is not positioned as a data provisioning platform, but rather as a computing platform that provides utilities to visualize fly brain data and to explore, analyze, evaluate and compare executable circuit models in a common environment. We would like to clarify that we regard data curation, integration and processing as the porting of currently publicly available datasets into the NeuroArch database, and we fully respect the expertise of the researchers generating and curating fly brain data. As such, apart from formatting changes for consistency with the NeuroArch database schema, minimal changes are made to any data saved to and loaded from the NeuroArch database into the Neurokernel Execution Engine. Although six default datasets are provided as starting points for users, the FlyBrainLab platform was developed to be agnostic of the content of the underlying datasets. We welcome the use of any additional datasets, which can be loaded locally on machines running FlyBrainLab without communicating with our publicly hosted data servers. We do acknowledge, however, that in addition to providing references to the previous NeuroArch publications, transparency in terms of how data are saved to and loaded from the NeuroArch database could be made clearer. To that end, we have done the following:

1) We published the code used to create the NeuroArch database, which can be used directly, and independently of us, to load new datasets and to incorporate updates of upstream data sources,

2) We updated the NeuroArch database to the latest version, which provides Hemibrain version 1.1, and

3) We published a webpage tracking versions of the NeuroArch database.

Please find below a number of concerns. I do not comment on the technical details of the FlyBrainLab platform modules, as that is not my expertise.

1) The structure of the manuscript and the way the examples are presented are not compelling for the average neuroscientist that wants to start using the public data (models, connectome and synaptome). Especially if the one of the main draws of this type of platform is for neuroscientists to start testing models based on real data. The main reason for this is that very little information is given on how experimental data is curated and integrated (see below for more).

We thank the reviewer for this feedback. While, as already mentioned, we do not curate datasets, we included additional tutorials and examples of how individual components of the FlyBrainLab platform can be used to access/visualize/manipulate data types such as neuroanatomical connectomics/synaptomics data. We also note that in the example shown in Figure 4 of the Results section, the workflow employed for modifying a previous (FlyCircuit-based) model of the Antennal Lobe to reflect new public (Hemibrain-based) data is intended as an illustration of how models can be updated using more recent (connectomics/synaptomics) data. Furthermore, we would like to clarify that the purpose of the FlyBrainLab platform is to provide an integrated system to experiment with fly brain data that are publicly available worldwide. The developers of FlyBrainLab make minimal modifications to the datasets apart from porting them into the NeuroArch database using a specified schema (as specified in Givon et al., 2015 (http://dx.doi.org/10.5281/zenodo.44225)). We do acknowledge the need for clearer examples of how newly published data can be integrated into the database. For end-users who intend to use other datasets not currently provided as defaults in FlyBrainLab, we included additional example notebooks detailing how such datasets can be loaded into FlyBrainLab. Finally, although not generally recommended, we note that components in NeuroMynerva (the user front-end) can be invoked independently, without communicating with the database. The neuroanatomy visualizer (Neu3D-Widget), for example, can load arbitrary swc or mesh files; the corresponding 3D data can be accessed in the associated Python kernel once the files are loaded into the widget. An example notebook (link) has been added to highlight this use case as well.
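For readers unfamiliar with the SWC format mentioned above: it is a plain whitespace-separated table, one sample point per line (id, structure type, x, y, z, radius, parent id; a parent of -1 marks the root). A minimal parser, independent of Neu3D and shown purely for illustration, might look like:

```python
from io import StringIO

def parse_swc(stream):
    """Parse an SWC morphology into {node_id: record} form.

    Lines starting with '#' are comments; each data line holds
    id, type, x, y, z, radius, parent (parent == -1 for the root).
    """
    nodes = {}
    for line in stream:
        line = line.strip()
        if not line or line.startswith('#'):
            continue
        i, t, x, y, z, r, p = line.split()[:7]
        nodes[int(i)] = {'type': int(t),
                         'xyz': (float(x), float(y), float(z)),
                         'radius': float(r),
                         'parent': int(p)}
    return nodes

# a toy two-point morphology: a soma sample and one dendrite sample
sample = """# id type x y z radius parent
1 1 0.0 0.0 0.0 1.0 -1
2 3 5.0 0.0 0.0 0.5 1
"""
nodes = parse_swc(StringIO(sample))
```

Neu3D itself does the parsing internally; the point of the sketch is only to show that morphology files of this kind carry no connectivity, which is why connectivity for light-level data must be inferred separately.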

2) What neuroanatomical data is being used and in what way is completely opaque. It is assumed that different modalities of data will have been processed in different ways, but very little information is given in this regard. How is the light-level FlyCircuit data processed to infer connectivity and how is this process validated? How are cell types identified and validated, in FlyCircuit and the hemibrain? How many neurons and types are used for each use case?

For example, regarding the CX circuit example, the authors say, "The innervation pattern of each neuron was visually examined in the NeuroNLP window and a standard name assigned according to the naming scheme adopted in the CXcircuit Library." How do these standard names relate to the cell type names used by the community? Identifying cell types from morphological data requires expertise when this is done to the highest resolution, and thus this process should be described in detail. In addition, it becomes very difficult to assess the use cases presented when there is no clarity on what neurons and types are being used.

We believe that two types of questions are raised above by the reviewer. One relates to how the anatomical data from the original dataset are processed and stored, and the other relates to how the data is interpreted from a model’s perspective and used in modeling.

To address the first type of question, we would like to reiterate the position we take on connectome data. We do not generate connectome data, including the morphology of the neurons and their connectivity, nor are we in a position to identify large quantities of cell types. The utility the platform provides is the API to read/write the NeuroArch Database; we do not provide any additional interpretations of these datasets. For transparency, we now provide the code that we used to create the NeuroArch datasets from their original sources (see also the response to your comment #3). For the Hemibrain dataset, the cell types and names of individual neurons have always been assigned according to the original dataset, along with a reference ID pointing to the ID used in the original dataset. For the FlyCircuit dataset, we included the original neurons from FlyCircuit 1.2 (http://flycircuit.tw), and the connectivity inferred according to the algorithm published and made available to us by the authors (https://doi.org/10.3389/fninf.2018.00099, now cited in the Data Availability section). For the Larva L1EM dataset, we included a detailed description of how the publicly served dataset (https://l1em.catmaid.virtualflybrain.org) is loaded into the NeuroArch Database (https://github.com/FlyBrainLab/datasets#README.md), including a CSV file that shows the mapping between the original neuron labels in the raw dataset and the labels used in FlyBrainLab. For transparency, the IDs of the neurons in the original data source are always available/displayed in the Info Panel (see also Appendix 1—figure 2).
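The label-mapping CSV mentioned above is, in essence, a two-column lookup table, and applying such a mapping is straightforward. The sketch below uses hypothetical column names and rows for illustration, not the schema or contents of the published file:

```python
import csv
from io import StringIO

def load_label_map(stream):
    """Read a two-column CSV (original label, FlyBrainLab label) into a dict."""
    reader = csv.reader(stream)
    next(reader)  # skip the header row
    return {orig: new for orig, new in reader}

# invented rows for illustration; the real file maps the raw L1EM
# dataset's neuron labels to the names used in FlyBrainLab
sample = StringIO("original_label,fbl_label\n"
                  "neuron_42,MN-mw-L\n"
                  "neuron_43,MN-mw-R\n")
label_map = load_label_map(sample)
```

With such a dictionary in hand, any record keyed by an original label can be renamed before, or cross-referenced after, loading into the database.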

To address the second type of question, we would like to note the following. First, even though a substantial amount of hard work has been put into annotating datasets by their original creators, data/labels may be missing and some labels are simply not very useful. In order to create an executable circuit, a user may need to make additional assumptions to assign labels/names/types. One example is the FlyCircuit dataset, which contains no cell type information; additional information must be brought in from the literature, or the user must make further assumptions. Second, the naming scheme provided by the original dataset, or even in the literature, is not always the best possible.

For example, as the reviewer pointed out, the standard naming scheme we used in the CX example is not the same as the one used by the research community (we also have to point out that many names are used by the community for these neuron types and, depending on the researcher, the names used appear largely arbitrary). We adopted a naming scheme (in the original submission) that is both human-readable and easily machine-parsable. The latter property has never been the focus of the naming schemes used by neurobiologists, but is critical when it comes to specifying neurons for code execution. These two points highlight the need for flexibility in processing publicly available data. FlyBrainLab provides users full access to the NeuroArch Database to update any of a neuron's information/metadata as desired.

Finally, we would like to address the concern raised in the last sentence of the reviewer's comment. The types of neurons in the CX example are already provided in the Materials and methods section, including in Figure 7 (in the revised manuscript). We also corrected our statement on visually examining these neurons; the visual examination was aided by the neuron types published in the paper (https://doi.org/10.1016/j.celrep.2013.04.022). For the neurons in the FlyCircuit dataset that are not mentioned in that paper, we made modeling assumptions according to the available evidence in the literature. The details are omitted here as they are largely out of scope; in the revised manuscript, however, we added further information on each individual neuron modeled. Similarly, the types of neurons and the number of neurons of each type are explicitly mentioned in the two examples of the early olfactory system.

3) Related to the point above, the authors list the hemibrain data used is from version 1.0.1 (gs://hemibrain-release/neuprint/hemibrain_v1.0.1_neo4j_inputs.zip). However, a new version of the data (1.1) was released online in May, with the data dumps available at least from the end of June (according to https://dvid.io/blog/release-v1.1/). The latest version significantly improves the cell typing that had been released (see https://docs.google.com/document/d/1vae3ClHR8z8uekqwrOHtqiux3oY5-Y_xw6W2srCi3PI/edit?usp=sharing). The authors should update their manuscript to use the latest version of data. This should highlight issues of how data can be kept up to date in these types of platforms and how integration of versions can be achieved. The authors should comment on the processes they use for this.

We thank the reviewer for this comment. Again, as mentioned in the responses to earlier questions, the main purpose of this manuscript is to describe FlyBrainLab as a platform of which the NeuroArch database, not an individual dataset, is a critical component. Incidentally, we presented a usage case in which the Hemibrain dataset version 1.0.1 is the main source of data. Using version 1.0.1 rather than version 1.1 makes no difference in showcasing the capabilities of FlyBrainLab. Therefore, we assert that there is no need to update the database in the example we presented in the paper.

To benefit the community, however, we do feel the need to update the database periodically and in a timely fashion for general usage. We did the following: (1) we updated the NeuroArch database to the latest version, which currently provides Hemibrain version 1.1, (2) we published a webpage tracking versions of the NeuroArch database, and (3) we published the code used to create the NeuroArch Database. The code can be used, independently of us, once an update of the upstream data source is available.

4) Presenting this platform as a Resource, it becomes essential that it is easy to install. I attempted to install FlyBrainLab according to the instructions in https://github.com/FlyBrainLab/FlyBrainLab. Using miniconda on macOS, which I already had installed for other purposes, I unfortunately ran into errors, and the installation was unsuccessful (seemingly caused by msgpack not being found). The instructions mention that the platform has only been tested in Ubuntu but that it "should work" in other platforms. I understand that it is not possible to test for and avoid, all possible errors, but the authors should test the installation in at least one other OS, if they want the average neuroscientist to start using it.

The tutorials listed in https://github.com/FlyBrainLab/Tutorials are certainly a very useful introduction, although they suffer from the issues in points 2 and 3.

We thank the reviewer for the valuable comment. To address the concern, we did the following:

1) We significantly expanded the installation instructions at https://github.com/FlyBrainLab/FlyBrainLab, added a Wiki page for troubleshooting (https://github.com/FlyBrainLab/FlyBrainLab/wiki/Troubleshooting), and noted that the Issue Trackers on GitHub can be useful in this case (we also understand that the reviewer needs to remain anonymous).

2) We tested the installation procedure on Linux (Ubuntu and CentOS), macOS and Windows.

3) We have already published and will keep updating a Docker image that has the full FlyBrainLab installed. We also provide an Amazon Machine Image to be used on the AWS EC2 service. Additional images on other services can be provided if requested.

4) We asked a diverse range of users, including colleagues, collaborators, and undergraduate students, to test the installation and received positive feedback.

[Editors' note: further revisions were suggested prior to acceptance, as described below.]

The revised manuscript has addressed some of the technical issues, but has not addressed the core issues of readability of the manuscript, and usability of the software, by a regular fly neurobiologist. This was stated in the essential revisions, point 1: "1. Could the authors provide substantially more detail on how an experimentalist would use the package? It should be clear why they would want to do so."

While the authors have responded with some limited explanations in the cover letter, the required changes are not evident in the manuscript, and it is there that these essential points of usability must be clarified. Again, it is not sufficient to refer the reader to the website to do this. The appendices, and much of the text, still mostly tell the reader what can be done, rather than how to do it. This should be rather early in the manuscript to motivate what follows.

Thank you for the clarification. In this revision, we substantially expanded the text to address the readability of the manuscript and how an experimentalist can use the platform. We added to the Results section two more examples showing the types of questions raised by a regular fly neurobiologist that FlyBrainLab can effectively answer, and to the Materials and methods section the steps needed to answer them with FlyBrainLab.

Specifically, we added to the Results section the entry “Building Fly Brain Circuits with English Queries”, showing the versatility of the English query interface of FlyBrainLab, technically known as NeuroNLP. NeuroNLP enables users, without any programming knowledge, to perform complex queries to build, visualize and explore biological circuits, a capability that none of the current data provisioning services is designed to provide to neurobiologists/neuroscientists. In the corresponding part of the Materials and methods section, the English queries employed are listed in full detail.
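NeuroNLP's actual parser handles grammar, synonyms and anatomical ontologies; the core idea, though — turning a plain-English query into a filter over neuron metadata — can be caricatured in a few lines. Everything below (the stop-word list, the metadata records, the matching rule) is a made-up illustration, not NeuroNLP's implementation:

```python
def query_neurons(db, text):
    """Toy keyword matcher: return the neurons whose metadata
    mentions every non-stop-word token of the query."""
    stop = {'show', 'the', 'in', 'a', 'neurons'}
    tokens = [t for t in text.lower().split() if t not in stop]
    hits = []
    for name, meta in db.items():
        haystack = ' '.join([name] + list(meta.values())).lower()
        if all(tok in haystack for tok in tokens):
            hits.append(name)
    return hits

# invented metadata records, for illustration only
db = {
    'T4a-1': {'neuropil': 'medulla', 'transmitter': 'cholinergic'},
    'Mi1-1': {'neuropil': 'medulla', 'transmitter': 'cholinergic'},
    'EPG-1': {'neuropil': 'ellipsoid body', 'transmitter': 'cholinergic'},
}
matches = query_neurons(db, 'show cholinergic neurons in the medulla')
```

The real interface resolves such a query against the NeuroArch Database and returns the matching neurons to the 3D visualization workspace, where follow-up queries can add to or refine the selection.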

We also moved a part of the text previously included in the supplement/appendix to the new entry entitled “Exploring the Structure and Function of Yet to be Discovered Brain Circuits” in the Results section. Here, we provided several examples on how to analyze connectome/synaptome datasets using FlyBrainLab to identify structures and cell types, and create circuit diagrams modeling brain pathways. We describe the steps to achieve these results in the corresponding part of the Materials and methods section.

In the newly added entry “Interactive Exploration of Executable Fruit Fly Brain Circuits” in the Results section, we present the construction of an interactive circuit diagram for rapidly developing circuit models. The capability to remove or re-enable a neuron in the circuit diagram is akin to, respectively, silencing and rescuing neurons in an experiment, and is of particular interest to systems neurobiologists. Such a capability quickly enables the exploration of biological findings by means of computational models, and is highly flexible and scalable beyond typical experimental settings.

Finally, we added Appendix 4 with a code walk-through highlighting the main capabilities of FlyBrainLab regarding model creation and circuit execution, including: (1) loading the NeuroArch Database from connectome datasets, (2) building and exploring biological circuits, (3) interactively exploring circuit diagrams, and (4) executing circuits retrieved from the NeuroArch Database.

In conclusion, the manuscript now comprehensively describes how neurobiologists and computational neuroscientists alike can leverage the power of FlyBrainLab, whether they want to simply visualize neural circuits through complex English queries, analyze connectivity data, or construct executable circuit models for exploration, analysis, comparison and evaluation.

Similarly, on essential point 2, the reviewers would like to know how their data goes in and is manipulated, and how they can be confident that what the progam does is faithful to the original. Issues of installation, which have been presented, are relevant, but of secondary technical importance.

Thank you for the clarification. We added the entry “Loading Publicly Available Datasets into NeuroArch Database” in the Materials and methods section, providing details of how each dataset is loaded into the NeuroArch Database using the NeuroArch API, along with high-level statistics of the loaded datasets. The scripts for loading these datasets have been published on GitHub: https://github.com/FlyBrainLab/Datasets. In addition, in Appendix 4, we now provide a complete walk-through of some of the core FlyBrainLab capabilities, including basic data loading, using a simple example.

[Editors' note: further revisions were suggested prior to acceptance, as described below.]

Essential Revisions:

The reviewers and I felt that the paper and FlyBrainLab provide a useful resource for the field. The revised version is considerably improved and the reviewers would like to suggest a few essential but straightforward revisions to make it even more accessible to the readers and users of this resource.

1) Update key references (indicated in the detailed reviews).

We updated the references according to the suggestions of reviewer #3. We also checked and updated references to preprints of other peer-reviewed publications.

2) Clarify Figures and their legends, especially Figure 2 and 3.

We updated Figure 2 and Figure 3 according to the suggestions from the reviewers.

There are several further important suggestions by the reviewers to strengthen the presentation and improve the accessibility of the paper and resource for readers. These are provided in the detailed reviewer comments below.

Reviewer #1:

This new version of the manuscript has a better layout and will be a more useful introduction to the application for new users. However there are still some issues with how the structure of the application is presented which may be difficult for readers.

Figure 2, especially the legend is quite minimal and there is nothing here to give a reader the key idea that this is a graphical application which a user would interact with through their browser. I suggest to move the screenshot of the application from Appendix 1—figure 2 to a panel in the main Figure 2, and make sure these panels are well integrated and explained, e.g. NeuroMynerva in the top panel is what you see in the bottom. Refer the reader here to Appendix 1—figure 1 for more details (some of the colors of the blocks match between the simplified/full versions, e.g. green NeuroArch, there's no reason they all shouldn't for ease of readability).

We updated Figure 2 as well as expanded on its caption as suggested.

NeuroNLP and NeuroGFX (window) are mentioned in the text without any context. These need to be shown/explained in Figure 2 and also described briefly where NeuroMynerva etc. are first defined in the Introduction. I would suggest highlighting all important component names in bold where they are first introduced so a user can go back to the definition as they are discussed later in the text.

As suggested, we highlighted the component names with bold font at their first instance.

It is strange that the actual short English language queries used for Figure 3 are not mentioned in the legend or main text. This is an important feature of the application and adding (at least some of) the sequence of commands for one of the panels (e.g. 3a, "show T4a", "color red", "add cholinergic presynaptic neurons" etc.) in another panel/table in the figure would be quite informative for readers. In the main text "(see also Materials and methods)" could be replaced with something better like: (the full sequence of queries which created this panel can be found in the Materials and methods). Also explain that the panels in Figure 3 are screenshots of the NeuroNLP window in Figure 2B, etc.

We added English queries in the first example (Use Case 1) and refer readers to the Materials and methods section for a full sequence of queries for the rest of the examples.

It is good having a section in the Materials and methods for each of other figures related to the main use cases/examples, but these could be tied together better also, making it clearer that the details of how the figure panels were generated can be found in the Materials and methods. Also some parts of the Materials and methods do not refer back to the figures, e.g. "Model A [26], Model B [27] and Model C [28]" could refer to Figure 6A, B, C etc. Small things like this would improve the readability of the paper significantly.

It might also be worth numbering the use cases/analysis types, e.g. Use Case 1-6, and adding these to subheadings to make it easier to move between the main text and Materials and methods.

We added Use Case 1-6 to each of the subheadings in the Results as well as in the Materials and methods section.

Overall the manuscript is a good introduction to the range of features FlyBrainLab offers and is structured such that a user can see what can be accomplished, and is given some guidance how they would achieve it themselves.

Reviewer #2:

The text is clearer and more inviting for a general audience. The enumeration of capabilities in the Introduction is effective. The Results section is structured well, displaying different use cases of FlyBrainLab. The accompanying tutorials online serve as good launching points for researchers.

Thank you to the authors for the additions in the main text, the code walkthroughs in the appendices, and the improved installation instructions. The basic tutorials are simple to follow. My only suggestion on the code side is to be more verbose in the introduction to the lamina cartridge executable circuit notebook and in the limitations of the user-side-only installation.

We thank the reviewer for the valuable comments and suggestions. We added more detailed instructions in the lamina cartridge tutorial, in particular on the exact steps for starting the backend servers and creating a workspace so that the code can be readily executed. We further clarified the limitations of the user-side-only installation in the Code Availability and Installation section in Materials and methods.

Reviewer #3:

The revised version of the manuscript addresses many of the concerns previously reported. Thank you to the authors for providing much clearer information about the data that is ready to use in the FlyBrain Lab platform, how it can be used, installed and the components of the FlyBrainLab. There are still some corrections that are needed regarding the source of some of the datasets.

Throughout the paper, reference 4 (Xu et al., 2020) is used as the citation for the hemibrain dataset. This is a preprint that has been superseded by the publication in September 2020 of the peer-reviewed paper (Scheffer et al., 2020, https://doi.org/10.7554/eLife.57443). It also needs updating in GitHub (https://github.com/FlyBrainLab/Datasets#ref-1)

As requested, we updated all citations to the paper above.

The reference to the larval L1EM dataset also needs correcting. For example this is given as reference 2 (Berck et al., 2016). The correct reference, as correctly shown in https://github.com/FlyBrainLab/Datasets#ref-3, is Ohyama et al., 2015 (reference 69). There might be other instances in the text that use the wrong citation.

We corrected the citation and checked to make sure that the reference is cited correctly in the rest of the manuscript.

The section added to the beginning of the Results, which includes Figure 3, provides readers with some examples on how they can start exploring the data in the platform (published datasets) using plain English queries. However, I do not think the added Figure 3 currently presents the data in a way that makes it easy for readers to link the relevant text and figure legend that describe the connectivity, to the panels. Each of the 4 examples (a-d) displays a neuron plot (left) and a connectivity matrix (right); other than reading each of the row/column names it is not possible to link the neurons plotted on the left to the data plotted on the right. Adding a colored annotation bar or even coloring the row/column names of the connectivity matrices according to the neuron plots would certainly help, or perhaps adding some clustering.

In the new Figure 3, we matched the color of the row/column neuron names in the adjacency matrix to the color of the visualized neurons. Note that in the interactive user interface, each neuron can be highlighted/addressed by its name. It is, however, not possible to reflect this feature in the printed version of the manuscript.
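As a schematic of what the right-hand panels of Figure 3 encode, an adjacency (synapse-count) matrix can be assembled from a list of (presynaptic, postsynaptic, count) records; coloring the row/column labels to match a 3D view then only requires sharing the neuron ordering. The neuron names and counts below are invented for illustration; this is our reconstruction of the idea, not the FlyBrainLab plotting code:

```python
import numpy as np

def adjacency_matrix(synapses, neurons):
    """Build an N x N synapse-count matrix from (pre, post, count) tuples.

    Rows index presynaptic neurons and columns postsynaptic neurons,
    both in the order given by `neurons`.
    """
    idx = {name: k for k, name in enumerate(neurons)}
    mat = np.zeros((len(neurons), len(neurons)), dtype=int)
    for pre, post, count in synapses:
        mat[idx[pre], idx[post]] += count
    return mat

# invented neuron names and synapse counts, for illustration only
neurons = ['T4a', 'Mi1', 'Tm3']
synapses = [('Mi1', 'T4a', 12), ('Tm3', 'T4a', 7), ('Mi1', 'Tm3', 3)]
mat = adjacency_matrix(synapses, neurons)
```

Reading down the 'T4a' column of `mat` gives that neuron's presynaptic partners, which is exactly the information the colored row/column labels in Figure 3 let a reader trace back to the rendered morphologies.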

Example 2 refers to a possible direct connection between the mushroom body and the fan-shaped body ("raising the question whether the two memory centers are directly connected"). Some of the neurons directly connecting these 2 neuropils (and possible pathways for visual information in addition to reference 17), have been described already, in Li et al., 2020 (December 2020, https://doi.org/10.7554/eLife.62576), one of the recent papers based on the hemibrain dataset. Could the authors please rephrase?

We rephrased the paragraph and referenced the paper.

https://doi.org/10.7554/eLife.62362.sa2

Article and author information

Author details

  1. Aurel A Lazar

    Department of Electrical Engineering, Columbia University, New York, United States
    Contribution
    Conceptualization, Resources, Formal analysis, Supervision, Funding acquisition, Investigation, Methodology, Writing - original draft, Project administration, Writing - review and editing, Conceived the study and FlyBrainLab software architecture. Developed comparative models of the central complex. Developed comparative models of the early olfactory system.
    For correspondence
    aurel@ee.columbia.edu
    Competing interests
    No competing interests declared
    Additional information
    The authors’ names are listed in alphabetical order.
    ORCID iD: 0000-0003-4261-8709
  2. Tingkai Liu

    Department of Electrical Engineering, Columbia University, New York, United States
    Contribution
    Conceptualization, Software, Formal analysis, Validation, Investigation, Visualization, Methodology, Writing - original draft, Writing - review and editing, Conceived the study and FlyBrainLab software architecture. Developed the FlyBrainLab platform. Developed user-side libraries. Developed comparative models of the early olfactory system.
    Competing interests
    No competing interests declared
    Additional information
    The authors’ names are listed in alphabetical order.
    ORCID iD: 0000-0003-3075-7648
  3. Mehmet Kerem Turkcan

    Department of Electrical Engineering, Columbia University, New York, United States
    Contribution
    Conceptualization, Software, Formal analysis, Validation, Investigation, Visualization, Methodology, Writing - original draft, Writing - review and editing, Conceived the study and FlyBrainLab software architecture. Developed the FlyBrainLab platform. Developed user-side libraries and utility libraries. Developed comparative models of the central complex. Developed comparative models of the early olfactory system.
    Competing interests
    No competing interests declared
    Additional information
    The authors’ names are listed in alphabetical order.
    ORCID iD: 0000-0001-9273-7293
  4. Yiyin Zhou

    Department of Electrical Engineering, Columbia University, New York, United States
    Contribution
    Conceptualization, Software, Formal analysis, Funding acquisition, Validation, Investigation, Visualization, Methodology, Writing - original draft, Project administration, Writing - review and editing, Conceived the study and FlyBrainLab software architecture. Developed the FlyBrainLab platform. Updated the server-side components of the existing FFBO architecture. Developed comparative models of the central complex.
    Competing interests
    No competing interests declared
    Additional information
    The authors’ names are listed in alphabetical order.
    ORCID iD: 0000-0003-4618-4039

Funding

Air Force Office of Scientific Research (FA9550-16-1-0410)

  • Aurel A Lazar

Defense Advanced Research Projects Agency (HR0011-19-9-0035)

  • Aurel A Lazar

The funders had no role in study design, data collection and interpretation, or the decision to submit the work for publication.

Acknowledgements

The research reported here was supported by AFOSR under grant #FA9550-16-1-0410 and DARPA under contract #HR0011-19-9-0035. The authors thank the reviewers for their constructive comments that significantly improved the presentation of the manuscript.

Senior Editor

  1. Ronald L Calabrese, Emory University, United States

Reviewing Editor

  1. Upinder Singh Bhalla, Tata Institute of Fundamental Research, India

Reviewer

  1. Padraig Gleeson, University College London, United Kingdom

Publication history

  1. Received: August 22, 2020
  2. Accepted: February 21, 2021
  3. Accepted Manuscript published: February 22, 2021 (version 1)
  4. Version of Record published: March 31, 2021 (version 2)

Copyright

© 2021, Lazar et al.

This article is distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use and redistribution provided that the original author and source are credited.
