Figures and data

Image acquisition workflow
a. Flowchart of the steps needed to acquire images. µCT-guided trimming is optional but facilitates targeted trimming and image acquisition. Shown are examples from the desert locust. b. µCT-guided trimming is done using the FIJI plugin Crosshair (Meechan et al., 2022). The imaging face and stub face are defined manually, and Crosshair provides the ultramicrotome parameters to trim the original block (left) into the correct orientation (right), as defined by the imaging and stub face planes. Example data come from the locust brain. c. Hypothetical timeline of the sample in b, showing milestones as they can be found at variable depths in the block. d. Example of a cellular resolution overview image of the sweat bee central complex, with perfectly overlapping synaptic resolution images (green rectangles). The depth relative to the central complex is shown on the right. e. Side-by-side example of cellular resolution (40x40 nm) and synaptic resolution (10x10 nm) images showing the same region. f. Zoomed-in image showing an example of two polyadic synapses (with multiple downstream partners). Arrows point to pre-synaptic densities; asterisks of the same color mark the corresponding post-synaptic sites. g. Schematic overview of the multi-resolution imaging approach. The central complex is tiled by repeating columnar units across its left-right axis. Synaptic resolution tiles (green) are placed such that they follow the path of key computational units, covering about half of the central complex. Figure 1—figure supplement 1. Comparison between cellular and synaptic resolution image data.

Estimated time gains from the multi-resolution imaging approach.
Size estimation assumes raw, uncompressed, 8-bit greyscale image data.
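The size estimate above follows directly from the voxel count at 1 byte per voxel. The sketch below illustrates the arithmetic; the volume dimensions and section thickness are hypothetical placeholders, not the actual dataset parameters.

```python
# Back-of-the-envelope size estimate for raw, uncompressed 8-bit greyscale
# vEM data (1 byte per voxel). All dimensions below are hypothetical
# examples, not the dimensions of the datasets in this study.

def stack_size_bytes(x_nm, y_nm, z_nm, pixel_nm, section_nm):
    """Voxel count x 1 byte for an 8-bit greyscale image stack."""
    x = x_nm // pixel_nm          # pixels along x
    y = y_nm // pixel_nm          # pixels along y
    z = z_nm // section_nm        # number of sections
    return x * y * z              # 1 byte per voxel

# Hypothetical 200 x 200 x 100 µm volume cut into 50 nm sections.
synaptic = stack_size_bytes(200_000, 200_000, 100_000, pixel_nm=10, section_nm=50)
overview = stack_size_bytes(200_000, 200_000, 100_000, pixel_nm=40, section_nm=50)

print(synaptic / 1e12, "TB at 10 nm")   # imaging everything at 10 nm
print(overview / 1e12, "TB at 40 nm")   # 4x coarser pixels -> 16x less data
```

The 16-fold difference per area is why restricting synaptic resolution imaging to targeted tiles saves so much acquisition time and storage.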

vEM image alignment pipeline
a. Flowchart of the general steps of the alignment pipeline, shown in detail below. Images shown were acquired from the locust central complex. For large regions of interest exceeding the maximum size of a microscope’s field of view, multiple image tiles must be assembled into a tileset (b). These tiles overlap, which is necessary to find correspondences with their neighbors. To align them, a rigid offset is first computed using SOFIMA (c), before fine transformations are computed with optic flow, regularized with an elastic mesh (d). The result is one contiguous image per slice with seamless transitions between tiles (e). Z alignment is then performed on the stitched images by comparing them to neighboring slices along the Z axis. An affine transformation roughly aligns the images (f) so that fine transformations can be computed using SOFIMA’s optic flow and mesh regularization (g). Once the entire cellular resolution overview stack is aligned along the Z axis into one coherent image stack, synaptic resolution images are first aligned in 2D, before being aligned to their corresponding slice of the cellular resolution stack. To this end, an affine transformation is computed to roughly align synaptic resolution to cellular resolution images (h), before fine alignment with SOFIMA’s optic flow and mesh regularization (i). The final output of this alignment pipeline is one cellular resolution image stack acting as a frame of reference for multiple, perfectly overlapping, synaptic resolution image stacks (j). Figure 2—figure supplement 1. Alignment of synaptic resolution data to cellular resolution; zoomed-in view of panel j. Figure 2—figure supplement 2. Example images showing charging artifacts.
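The coarse, rigid step of tile stitching can be illustrated with generic FFT-based cross-correlation: the peak of the correlation between two overlapping tiles gives the integer translation between them. This is a minimal sketch of the idea only, not SOFIMA’s actual implementation, and the synthetic image is a stand-in for real tile data.

```python
import numpy as np

def rigid_offset(a, b):
    """Estimate the integer (dy, dx) shift of tile b relative to tile a
    via FFT-based cross-correlation. Illustrative sketch of the rigid
    alignment idea; SOFIMA's real implementation differs in detail."""
    # Cross-correlate in the Fourier domain: F(b) * conj(F(a)).
    corr = np.fft.ifft2(np.fft.fft2(b) * np.conj(np.fft.fft2(a))).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Map wrap-around indices to signed shifts.
    if dy > a.shape[0] // 2:
        dy -= a.shape[0]
    if dx > a.shape[1] // 2:
        dx -= a.shape[1]
    return dy, dx

# Synthetic check: circularly shift an image by (3, -5) and recover it.
rng = np.random.default_rng(0)
img = rng.random((64, 64))
shifted = np.roll(img, (3, -5), axis=(0, 1))
print(rigid_offset(img, shifted))  # -> (3, -5)
```

After this rigid step, the residual, spatially varying distortions are what the optic-flow and elastic-mesh stages absorb.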

Combining manual and automatic neuron reconstruction
a. Flowchart for multi-resolution reconstruction of neuron morphology. b. Example of ground-truth data manually segmented to train a 3D U-Net. We used 5 cubes, representing 5x5x5 µm³ of image data, to train our model. c. Segmentation workflow. Raw data are used to predict nearest neighbor affinities (NNA) with a trained 3D U-Net. NNA are used to compute supervoxels by watershedding. The agglomerated segmentation consists of relabeling connected components after thresholding a segmentation graph, producing accurate neuron segmentation. d. Example of a synapse (zoomed in from raw image data in c) identified by a 3D U-Net trained to detect synaptic sites. Pre-synaptic (not shown) and post-synaptic sites are detected separately and used to compute synaptic site pairs. The right-hand side shows an example of a reconstructed and proofread neuron with detected synaptic sites. Green: accurately predicted partner and synaptic site. Yellow: accurately predicted partner with wrong synaptic site. Red: false negative, missing prediction. e. Overview of a multi-resolution neuron reconstruction. The neuron backbone is manually traced using CATMAID (red), while synaptic resolution compartments are automatically segmented (green). f. The skeleton resulting from manual tracing is used to identify cell types and bridge the gap between compartments of the same neuron segmented in synaptic resolution image stacks. g-h. Illustration of proofreading using CAVE. The segmentation is fragmented into regular chunks of equal size when uploaded to CAVE (g). An existing skeleton can be used to find fragments belonging to the same skeleton; otherwise, proofreaders assemble fragments before merging them. Using CAVE (h), proofreaders can correct split errors caused by faulty segmentation, and merge errors caused either by faulty segmentation or by artificial splits. Figure 3—figure supplement 1. F-scores for synapse predictions. Figure 3—figure supplement 2. Example of fused membrane artifacts.
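The final agglomeration step described in panel c — thresholding a graph of affinity-weighted edges between supervoxels and relabeling connected components — can be sketched in a few lines. The toy graph below is hypothetical, not real segmentation output.

```python
# Minimal sketch of agglomeration: supervoxels (nodes) linked by
# affinity-weighted edges are merged whenever the edge score exceeds a
# threshold, then connected components are relabeled as single segments.
# The toy graph below is illustrative, not actual pipeline output.

def agglomerate(n_supervoxels, edges, threshold):
    """edges: list of (u, v, affinity). Returns one label per supervoxel."""
    parent = list(range(n_supervoxels))

    def find(x):                      # union-find with path compression
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    for u, v, aff in edges:
        if aff >= threshold:          # merge only high-affinity pairs
            parent[find(u)] = find(v)

    roots = {}
    labels = []
    for x in range(n_supervoxels):    # relabel components 0, 1, 2, ...
        labels.append(roots.setdefault(find(x), len(roots)))
    return labels

edges = [(0, 1, 0.9), (1, 2, 0.95), (2, 3, 0.2), (3, 4, 0.85)]
print(agglomerate(5, edges, threshold=0.5))  # -> [0, 0, 0, 1, 1]
```

Raising the threshold biases the result toward split errors, lowering it toward merge errors — the trade-off that the CAVE proofreading step in panels g-h then corrects.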

Proof of concept reconstruction of head direction cells across insects
a. Traced EPG and PEG neurons, color-coded by the column of the protocerebral bridge they innervate. Insects: African praying mantis (Sphodromantis lineola), Madeira cockroach (Rhyparobia maderae), desert locust (Schistocerca gregaria), European earwig (Forficula auricularia), army ant (Eciton hamatum), sweat bee (Megalopta genalis). For the locust, EPG neurons matched the projection patterns in other insects (n=48), while some EPG-like neurons did not project outside of the central complex (EP neurons, n=16; shown in dark grey). For the earwig, only right-hemisphere cells were traced (shown in color). The total cell count (54) was estimated by doubling the cell number from the right hemisphere (27), assuming symmetry between the right and left sides (mirrored right neurons shown in grey). b. Phylogenetic tree highlighting the species shown in panel a. c. 3D representation of segmented EPG and PEG neurons of the sweat bee. Skeletons bridge the PB and the EB where automatic segmentation and proofreading yielded full reconstruction of neural branches in six columns of the protocerebral bridge and four columns of the ellipsoid body. Automatically detected synapses are shown below for each cell type. Presynaptic sites, red; postsynaptic sites, blue. d-e. Connectivity graphs showing recurrent circuits between EPG and PEG cells in the protocerebral bridge and the ellipsoid body of the sweat bee. Skeletons were used to match reconstructions across compartments, enabling detection of recurrent circuits across distant high-resolution image volumes. f-g. Synapse distribution (f) and number of cells (g) per column across neuropils for EPG (upper row) and PEG neurons (lower row) for the sweat bee. Note that the low numbers of synapses in column L7 of the protocerebral bridge, and columns C3 and C4 of the ellipsoid body, are due to incomplete reconstructions at the boundary of synaptic resolution image stacks.
Picture credits in panel a: Chris Dlouhy (Rhyparobia maderae), Daniel Kronauer (Eciton hamatum), and Ajay Narendra (Megalopta genalis). Picture of Forficula auricularia by Georg Ekhöft, Observation.org (https://observation.org/photos/109145418/), CC BY-NC 4.0 (https://creativecommons.org/licenses/by-nc-nd/4.0). Figure 4—figure supplement 1. Side-by-side comparison of EPG and PEG neuron in the protocerebral bridge.
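Connectivity graphs like those in panels d-e boil down to counting detected (pre, post) synaptic site pairs per cell pair once skeletons have matched segments to identified cells. The sketch below shows this bookkeeping; the cell names and counts are hypothetical, not actual sweat bee connectivity data.

```python
from collections import Counter

# Toy sketch: turn detected (pre, post) synaptic site pairs into a weighted
# connectivity graph between identified cells. Cell names and counts below
# are hypothetical, not real reconstruction output.
synapse_pairs = [
    ("EPG_L4", "PEG_L4"), ("EPG_L4", "PEG_L4"),
    ("PEG_L4", "EPG_L4"),                      # reciprocal, i.e. recurrent
    ("EPG_L5", "PEG_L5"),
]

# Edge weight = number of detected synapses from pre cell to post cell.
graph = Counter(synapse_pairs)
for (pre, post), n in sorted(graph.items()):
    print(f"{pre} -> {post}: {n} synapse(s)")
```

A recurrent circuit then shows up as a pair of edges running in both directions between the same two cells, even when the pre- and post-synaptic sites were detected in distant high-resolution volumes.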

Zoomed in comparison between cellular and synaptic resolution image data.
Images extracted from the same region of the protocerebral bridge of Megalopta genalis, illustrating the correspondence between cellular resolution (40x40 nm pixel size, left) and synaptic resolution (10x10 nm pixel size, right).

Alignment of synaptic resolution data to cellular resolution images.

Example images showing charging artifacts.
Images of the locust dataset illustrating the effect of charging on the image data. Panel a shows dramatic charging that made alignment impossible. Alignment was possible in panel b, despite the contrast difference, because charging was minimal.

F-scores for synapse prediction.
F-scores were computed for each neuropil dataset in Megalopta genalis for three levels of image quality, decreasing from “Best” to “Moderate”. Some images contained no synapses and were expected to yield no predictions.
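The F-score is the harmonic mean of precision and recall over predicted synaptic sites. A minimal sketch, with illustrative counts; the convention of scoring an empty image with no predictions as perfect is an assumption made here to handle the no-synapse images mentioned above.

```python
def f_score(tp, fp, fn):
    """F1 = harmonic mean of precision and recall (counts are illustrative)."""
    # Assumed convention: an image with no synapses and no predictions
    # scores perfectly, so empty ground truth does not divide by zero.
    precision = tp / (tp + fp) if tp + fp else 1.0
    recall = tp / (tp + fn) if tp + fn else 1.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

print(f_score(tp=8, fp=2, fn=2))   # -> 0.8
print(f_score(tp=0, fp=0, fn=0))   # empty image, no predictions
```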

Example of holes in cell membranes causing false merges in image segmentation.
Image showing holes in the membranes of three neurons (arrows) that caused a false merge during automatic segmentation. Multiple neurons are initially assigned the same label, as shown by their color. Some neurons do not appear to have holes but are likely connected outside of the region shown. After performing corrective splits via CAVE, the segmentation accurately reflects neuron morphology.

Side-by-side comparison of EPG and PEG neuron in the protocerebral bridge.
Reconstruction of protocerebral bridge branches of an EPG (right, red) and a PEG (left, blue) neuron. Pre-synaptic sites are shown in black, post-synaptic sites in yellow.