DeepFly3D, a deep learning-based approach for 3D limb and appendage tracking in tethered, adult Drosophila
Abstract
Studying how neural circuits orchestrate limbed behaviors requires the precise measurement of the positions of each appendage in three-dimensional (3D) space. Deep neural networks can estimate two-dimensional (2D) pose in freely behaving and tethered animals. However, the unique challenges associated with transforming these 2D measurements into reliable and precise 3D poses have not been addressed for small animals, including the fly, Drosophila melanogaster. Here, we present DeepFly3D, a software that infers the 3D pose of tethered, adult Drosophila using multiple camera images. DeepFly3D does not require manual calibration, uses pictorial structures to automatically detect and correct pose estimation errors, and uses active learning to iteratively improve performance. We demonstrate more accurate unsupervised behavioral embedding using 3D joint angles rather than commonly used 2D pose data. Thus, DeepFly3D enables the automated acquisition of Drosophila behavioral measurements at an unprecedented level of detail for a variety of biological applications.
https://doi.org/10.7554/eLife.48571.001
Introduction
The precise quantification of movements is critical for understanding how neurons, biomechanics, and the environment influence and give rise to animal behaviors. For organisms with skeletons and exoskeletons, these measurements are naturally made with reference to 3D joint and appendage locations. Paired with modern approaches to simultaneously record the activity of neural populations in tethered, behaving animals (Dombeck et al., 2007; Seelig et al., 2010; Chen et al., 2018), 3D joint and appendage tracking promises to accelerate the discovery of neural control principles, particularly in the genetically tractable and numerically simple nervous system of the fly, Drosophila melanogaster.
However, algorithms for reliably estimating 3D pose in such small, Drosophila-sized animals have not yet been developed. Instead, multiple alternative approaches have been taken. For example, one can affix small markers—reflective, colored, or fluorescent particles—to identify and reconstruct keypoints from video data (Bender et al., 2010; Kain et al., 2013; Todd et al., 2017). Although this approach works well on humans (Moeslund and Granum, 2000), in smaller, Drosophila-sized animals markers likely hamper movements and are difficult to mount on submillimeter-scale limbs. Most importantly, measurements of one or even two markers per leg (Todd et al., 2017) cannot fully describe 3D limb kinematics. Another strategy has been to use computer vision techniques that operate without markers. However, these measurements have been restricted to 2D pose in freely behaving flies. Before the advent of deep learning, this was accomplished by matching the contours of animals seen against uniform backgrounds (Isakov et al., 2016), measuring limb tip positions using complex TIRF-based imaging approaches (Mendes et al., 2013), or measuring limb segments using active contours (Uhlmann et al., 2017). In addition to being limited to 2D rather than 3D pose, these methods are complex, time-consuming, and error-prone in the face of long data sequences, cluttered backgrounds, fast motion, and the occlusions that naturally occur when animals are observed from a single 2D perspective.
As a result, in recent years the computer vision community has largely forsaken these techniques in favor of deep learning-based methods. Consequently, the efficacy of monocular 3D human pose estimation algorithms has greatly improved. This is especially true when capturing human movements for which there is enough annotated data to train deep networks effectively. Walking and upright poses are prime examples of this, and state-of-the-art algorithms (Pavlakos et al., 2017a; Tome et al., 2017; Popa et al., 2017; Moreno-Noguer, 2017; Martinez et al., 2017; Mehta et al., 2017; Rogez et al., 2017; Pavlakos et al., 2017b; Zhou et al., 2017; Tekin et al., 2017; Sun et al., 2017) now deliver impressive real-time results in uncontrolled environments. Increased robustness to occlusions can be obtained by using multi-camera setups (Elhayek et al., 2015; Rhodin et al., 2016; Simon et al., 2017; Pavlakos et al., 2017b) and triangulating the 2D detections. This improves accuracy while making it possible to eliminate false detections.
These advances in 2D pose estimation have also recently been used to measure behavior in laboratory animals. For example, DeepLabCut provides a user-friendly interface to DeeperCut, a state-of-the-art human pose estimation network (Mathis et al., 2018), and LEAP (Pereira et al., 2019) can successfully track limb and appendage landmarks using a shallower network. Still, 2D pose provides an incomplete representation of animal behavior: important information can be lost due to occlusions, and movement quantification is heavily influenced by perspective.
Approaches used to translate human 2D pose into 3D pose have also been applied to larger animals, like laboratory mice and cheetahs (Nath et al., 2019), but they require the use of calibration boards. These techniques cannot be easily transferred to the study of small animals like Drosophila: adult flies are approximately 2.5 mm long, and precisely registering multiple camera viewpoints using traditional approaches would require the fabrication and tedious manipulation of a prohibitively small, external checkerboard calibration pattern. Moreover, flies have many appendages and joints, are translucent, and in most laboratory experiments are only illuminated using infrared light (to avoid visual stimulation)—precluding the use of color information.
To overcome these challenges, we introduce DeepFly3D, a deep learning-based software pipeline that achieves comprehensive, rapid, and reliable 3D pose estimation in tethered, behaving adult Drosophila (Figure 1, Figure 1—video 1). DeepFly3D is applied to synchronized videos acquired from multiple cameras. It first uses a state-of-the-art deep network (Newell et al., 2016) and then enforces consistency across views. This makes it possible to eliminate spurious detections, achieve high 3D accuracy, and use 3D pose errors to further fine-tune the deep network to achieve even better accuracy. To register the cameras, DeepFly3D uses a novel calibration mechanism in which the fly itself is the calibration target. During the calibration process, we also employ sparse bundle adjustment methods, as previously used for human pose estimation (Takahashi et al., 2018; Triggs et al., 2000; Puwein et al., 2014). Thus, the user does not need to manufacture a prohibitively small calibration pattern or repeat cumbersome calibration protocols. We explain how users can modify the codebase to extend DeepFly3D for 3D pose estimation in other animals (see Materials and methods). Finally, we demonstrate that unsupervised behavioral embedding of 3D joint angle data is robust against problematic artifacts present in embeddings of 2D pose data. In short, DeepFly3D delivers 3D pose estimates reliably, accurately, and with minimal manual intervention, while also providing a critical tool for automated behavioral data analysis.
Results
DeepFly3D
The input to DeepFly3D is video data from seven cameras. These images are used to identify the 3D positions of 38 landmarks per animal: (i) five on each limb – the thorax-coxa, coxa-femur, femur-tibia, and tibia-tarsus joints as well as the pretarsus, (ii) six on the abdomen (three on each side), and (iii) one on each antenna (for measuring head rotations). Our software incorporates the following innovations designed to ensure automated, high-fidelity, and reliable 3D pose estimation.
Calibration without an external calibration pattern
Estimating 3D pose from multiple images requires calibrating the cameras to achieve a level of accuracy commensurate with the target size—a difficult challenge when measuring leg movements for an animal as small as Drosophila. Therefore, instead of using a typical external calibration grid, DeepFly3D uses the fly itself as a calibration target. It detects arbitrary points on the fly’s body and relies on bundle adjustment (Chavdarova et al., 2018) to simultaneously assign 3D locations to these points and to estimate the positions and orientations of each camera. To increase robustness, it enforces geometric constraints that apply to tethered flies with respect to limb segment lengths and ranges of motion.
Geometrically consistent reconstructions
Starting with a state-of-the-art deep network for 2D keypoint detection in individual images (Newell et al., 2016), DeepFly3D enforces geometric consistency constraints across multiple synchronized camera views. When triangulating 2D detections to produce 3D joint locations, it relies on pictorial structures and belief propagation message passing (Felzenszwalb and Huttenlocher, 2005) to detect and further correct erroneous pose estimates.
Self-supervision and active learning
DeepFly3D also uses multiple view geometry as a basis for active learning. Thanks to the redundancy inherent in obtaining multiple views of the same animal, we can identify erroneous 2D predictions whose correction would most efficiently train the 2D pose deep network. This approach greatly reduces the need for time-consuming manual labeling (Simon et al., 2017). We also use pictorial structure corrections to fine-tune the 2D pose deep network. Self-supervision ultimately provided 85% of our training data.
2D pose performance and improvement using pictorial structures
We validated our approach using a challenging dataset of 2,063 frames manually annotated using the DeepFly3D annotation tool and sampled uniformly from each camera. Images for testing and training were 480 × 960 pixels. The test dataset included challenging frames and occasional motion blur to increase the difficulty of pose estimation. For training, we used a final training dataset of 37,000 frames, the overwhelming majority of which were first automatically corrected using pictorial structures. On test data, we achieved a Root Mean Square Error (RMSE) of 13.9 pixels. Compared with a ground truth RMSE of 12.4 pixels – obtained via manual annotation of 210 images by a new human expert – our Network Annotation/Manual Annotation ratio of 1.12 (13.9 pixels / 12.4 pixels) is similar to the ratio of another state-of-the-art network (Mathis et al., 2018): 1.07 (2.88 pixels / 2.69 pixels). Setting a 50 pixel threshold (approximately one third the length of the femur) for PCK (percentage of correct keypoints) computation, we observed a 98.2% overall accuracy before applying pictorial structures. Notably, if we reduced this threshold to 30 or 20 pixels, we still achieved 95% or 89% accuracy, respectively (Figure 2A).
To test the performance of our network in a low-data regime, we trained a two-stack network using ground-truth annotation data from seven cameras (Figure 2B). We compared the results to an asymptotic prediction error (i.e. the error observed when the network is trained using the full dataset of 40,000 annotated images) and to the variability observed in human annotations of 210 randomly selected images. We measured an asymptotic MAE (mean absolute error) of 10.5 pixels and a human variability MAE of 9.2 pixels. With 800 annotations, our network achieved an accuracy similar to manual annotation and was near the asymptotic prediction error. Further annotation yielded diminishing returns.
Although our network achieves high accuracy, the error is not isotropic (Figure 2C). The tarsus tips (i.e. pretarsi) exhibited larger errors than the other joints, perhaps due to occlusions from the spherical treadmill and their higher positional variance. The increased error observed for body-coxa joints might be due to the difficulty of annotating these landmarks from certain camera views.
To correct the residual errors, we applied pictorial structures. This strategy fixed 59% of the remaining erroneous predictions, increasing the final accuracy from 98.2% to 99.2%. These improvements are illustrated in Figure 3. Pictorial structure failures were often due to pose ambiguities resulting from heavy motion blur. These remaining errors were automatically detected using multi-view redundancy (Equation 6) and earmarked for manual correction using the DeepFly3D GUI.
3D pose permits robust unsupervised behavioral classification
Unsupervised behavioral classification approaches enable the unbiased quantification of animal behavior by processing data features—image pixel intensities (Berman et al., 2014; Cande et al., 2018), limb markers (Todd et al., 2017), or 2D pose (Pereira et al., 2019)—to cluster similar behavioral epochs without user intervention and to automatically distinguish between otherwise similar actions. However, with this sensitivity may come a susceptibility to features unrelated to behavior, including changes in image size or perspective resulting from differences in camera angle across experimental systems, variable mounting of tethered animals, and inter-animal morphological variability. In theory, each of these issues can be overcome—providing scale and rotational invariance—by using 3D joint angles rather than 2D pose for unsupervised embedding.
To test this possibility, we performed unsupervised behavioral classification (Figure 4 and Figure 5) on video data taken during optogenetic stimulation experiments that repeatedly and reliably drove certain behaviors. Specifically, we optically activated CsChrimson (Klapoetke et al., 2014) to elicit backward walking in MDN>CsChrimson animals (Figure 5—video 1) (Bidaye et al., 2014), or antennal grooming in aDN>CsChrimson animals (Figure 5—video 2) (Hampel et al., 2015). We also stimulated control animals lacking the UAS-CsChrimson transgene (Figure 5—video 3) (MDN-GAL4/+ and aDN-GAL4/+). First, we performed unsupervised behavioral classification using 2D pose data from three adjacent cameras containing keypoints for three limbs on one side of the body. Using these data, we generated a behavioral map (Figure 4A). In this map, each individual cluster would ideally represent a single behavior (e.g. backward walking, or grooming) and be populated by nearly equal amounts of data from each of the three cameras. This was not the case: data from each camera covered non-overlapping regions and clusters (Figure 4B–D). This effect was most pronounced when comparing regions populated by cameras 1 and 2 versus camera 3. Therefore, because the underlying behaviors were otherwise identical (data across cameras were from the same animals and experimental time points), we conclude that unsupervised behavioral classification of 2D pose data is corrupted by differences in viewing angle.
By contrast, performing unsupervised behavioral classification using DeepFly3D-derived 3D joint angles resulted in a map (Figure 5) with a clear segregation and enrichment of clusters for different GAL4 driver lines and their associated behaviors, i.e. backward walking (Figure 5—video 4), grooming (Figure 5—video 5), and forward walking (Figure 5—video 6). Thus, 3D pose overcomes serious issues arising from unsupervised embedding of 2D pose data, enabling more reliable and robust behavioral data analysis.
Discussion
We have developed DeepFly3D, a deep learning-based 3D pose estimation system that is optimized for quantifying limb and appendage movements in tethered, behaving Drosophila. By using multiple synchronized cameras and exploiting multi-view redundancy, our software delivers robust and accurate pose estimation at the submillimeter scale. Our approach relies on supervised deep learning to train a neural network that detects 2D joint locations in individual camera images. Importantly, our network becomes increasingly competent as it runs: by leveraging the redundancy inherent to a multiple-camera setup, we iteratively reproject 3D pose to automatically detect and correct 2D errors, and then use these corrections to further train the network without user intervention. Ultimately, it may become possible to work solely with monocular images by lifting the 2D detections to 3D (Pavlakos et al., 2017b) or by regressing directly to 3D (Tekin et al., 2017), as has been achieved in human pose estimation studies.
None of the techniques we have put together—an approach for multiple-camera calibration that uses the animal itself rather than an external apparatus, an iterative approach to inferring 3D pose using graphical models together with optimization based on dynamic programming and belief propagation, and a graphical user interface and active learning policy for interacting with, annotating, and correcting 3D pose data—are fly-specific. They could easily be adapted to other limbed animals, from mice to primates and humans. The only things that would have to change significantly are the dimensions of the experimental setup. This would remove the need to deal with the very small scales that Drosophila requires and would, in practice, make pose estimation easier. In the Materials and methods section, we explain in detail how organism-specific features of DeepFly3D—bone segment lengths, the number of legs, and the camera focal distance—can be modified to study, for example, humans, primates, rodents, or other insects.
As in the past, we anticipate that the development of new technologies for quantifying behavior will open new avenues and enhance existing lines of investigation. For example, deriving 3D pose using DeepFly3D can improve the resolution of studies examining how neuronal stimulation influences animal behavior (Cande et al., 2018; McKellar et al., 2019), the precision and predictive power of efforts to define natural action sequences (Seeds et al., 2014; McKellar et al., 2019), the assessment of interventions that target models of human disease (Feany and Bender, 2000; Hewitt and Whitworth, 2017), and links between neural activity and animal behavior—when coupled with recording technologies like two-photon microscopy (Seelig et al., 2010; Chen et al., 2018). Importantly, 3D pose improves the robustness of unsupervised behavioral classification approaches. Therefore, DeepFly3D is a critical step toward the ultimate goal of achieving fully automated, high-fidelity behavioral data analysis.
Materials and methods
With synchronized Drosophila video sequences from seven cameras in hand, the first task for DeepFly3D is to detect the 2D location of 38 landmarks. These 2D locations of the same landmarks seen across multiple views are then triangulated to generate 3D pose estimates. This pipeline is depicted in Figure 6. First, we will describe our deep learningbased approach to detect landmarks in images. Then, we will explain the triangulation process that yields full 3D trajectories. Finally, we will describe how we identify and correct erroneous 2D detections automatically.
2D pose estimation
Deep network architecture
We aim to detect five joints on each limb, six on the abdomen, and one on each antenna, giving a total of 38 keypoints per time instance. To achieve this, we adapted a state-of-the-art Stacked Hourglass human pose estimation network (Newell et al., 2016) by changing the input and output layers to accommodate a new input image resolution and a different number of tracked points. A single hourglass stack consists of residual bottleneck modules with max pooling, followed by upsampling layers and skip connections. The first hourglass network begins with a convolutional layer and a pooling layer that reduce the input image size from 256 × 512 to 64 × 128 pixels. The remaining hourglass input and output tensors are also 64 × 128. We used eight stacks of hourglasses in our final implementation. The output of the network is a stack of probability maps, also known as heatmaps or confidence maps. Each probability map encodes the location of one keypoint as the belief of the network that a given pixel contains that particular tracked point. Note that these probability maps do not formally define a probability distribution; their sum over all pixels does not equal one.
2D pose training dataset
We trained our network for 19 keypoints, resulting in the tracking of 38 points when both sides of the fly are taken into account. Determining which images to use for training purposes is critical. The intuitively simple approach—training with randomly selected images—may lead to only marginal improvements in overall network performance. This is because images for which network predictions can already be made correctly give rise to only small gradients during training. On the other hand, manually identifying images that may lead to incorrect network predictions is highly laborious. Therefore, to identify such challenging images, we exploited the redundancy of having multiple camera views (see section 3D pose correction). Outliers in individual camera images were corrected automatically using images from other cameras, and frames that still exhibited large reprojection errors in multiple camera views were selected for manual annotation and network retraining. This combination of self-supervision and active learning permits faster training using a smaller manually annotated dataset (Simon et al., 2017). The full annotation and iterative training pipeline is illustrated in Figure 6. In total, 40,063 images were annotated: 5,063 were labeled manually in the first iteration, 29,000 by automatic correction, and 6,000 by manually correcting those proposed by the active learning strategy.
Deep network training procedure
We trained our Stacked Hourglass network to regress from 256 × 512 pixel grayscale video images to multiple 64 × 128 probability maps. Specifically, during training and testing, the network outputs a 19 × 64 × 128 tensor: one 64 × 128 probability map per tracked point. During training, we created target probability maps by embedding a 2D Gaussian with its mean at the ground-truth pixel and a symmetric 1 px extent (i.e. $\sigma = 1$ px on the diagonal of the covariance matrix). We calculated the loss as the ${L}_{2}$ distance between the ground-truth and predicted probability maps. During testing, the final network prediction for a given point was the probability map pixel with maximum probability. We started with a learning rate of 0.0001 and then multiplied the learning rate by a factor of 0.1 once the loss function plateaued for more than five epochs. We used the RMSprop optimizer for gradient descent, following the original Stacked Hourglass implementation, with a batch size of eight images. Using 37,000 training images, the Stacked Hourglass network usually converges to a local minimum after 100 epochs (20 h on a single GPU).
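The relationship between ground-truth coordinates, target probability maps, the training loss, and the read-out of predictions can be illustrated with a short sketch. This is a minimal illustration under the conventions above, not the DeepFly3D implementation: the helper names, the commented-out `network` placeholder, and the scheduler settings are assumptions.

```python
import torch
import torch.nn.functional as F

def make_heatmap(x, y, height=64, width=128, sigma=1.0):
    """Embed a 2D Gaussian (sigma = 1 px) centered on the ground-truth pixel.
    x, y are given in heatmap (64 x 128) coordinates."""
    ys = torch.arange(height, dtype=torch.float32).view(-1, 1)
    xs = torch.arange(width, dtype=torch.float32).view(1, -1)
    return torch.exp(-((xs - x) ** 2 + (ys - y) ** 2) / (2 * sigma ** 2))

def heatmap_loss(predicted, target):
    """Mean squared (L2) error between predicted and ground-truth probability maps."""
    return F.mse_loss(predicted, target)

def decode_keypoints(heatmaps):
    """Final prediction: the pixel with maximum probability in each probability map."""
    n_joints, h, w = heatmaps.shape
    flat_idx = heatmaps.view(n_joints, -1).argmax(dim=1)
    return torch.stack([flat_idx % w, flat_idx // w], dim=1)  # (x, y) per joint

# Hypothetical training step: 'network' maps a 256 x 512 grayscale image to a
# 19 x 64 x 128 tensor of probability maps, as described in the text.
# optimizer = torch.optim.RMSprop(network.parameters(), lr=1e-4)
# scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(optimizer, factor=0.1, patience=5)
```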
Network training details
Variations in each fly’s position across experiments are handled by the translational invariance of the convolution operation. In addition, we artificially augmented training images to improve network generalization with respect to other image variables. These variables include (i) illumination conditions – we randomly changed the brightness of images using a gamma transformation, (ii) scale – we randomly rescaled images between 0.80x and 1.20x, and (iii) rotation – we randomly rotated images and corresponding probability maps by ±15°. This augmentation was enough to compensate for real differences in the size and orientation of tethered flies across experiments. Furthermore, as per general practice, the mean channel intensity was subtracted from each input image to center pixel intensities around zero. We began network training using pretrained weights from the MPII human pose dataset (Andriluka et al., 2014). This dataset consists of more than 25,000 images with 40,000 annotations, possibly with multiple ground-truth human pose labels per image. Starting with a pretrained network results in faster convergence. However, in our experience, this does not affect final network accuracy when a large amount of training data is available. We split the dataset into 37,000 training images, 2,063 testing images, and 1,000 validation images. None of these subsets shared common images or common animals, to ensure that the network could generalize across animals and experimental setups. 5,063 of our training images were manually annotated, and the remaining data were automatically collected using belief propagation, graphical models, and active learning (see section 3D pose correction). Deep neural network parameters must be trained on a dataset with manually annotated ground-truth keypoint positions. To initialize the network, we collected annotations using a custom multi-camera annotation tool that we implemented in JavaScript using Google Firebase (Figure 7). The DeepFly3D annotation tool operates on a simple webserver, easing the distribution of annotations across users and making these annotations much easier to inspect and control.
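As an illustration, the three augmentations could be implemented roughly as follows. This is a hedged sketch assuming NumPy/SciPy arrays with intensities in [0, 1]; the gamma range and interpolation settings are illustrative, and a real pipeline would crop or pad the result back to the fixed network input size.

```python
import numpy as np
from scipy import ndimage

rng = np.random.default_rng()

def augment(image, prob_maps):
    """Randomly perturb brightness, scale, and rotation of an image and its probability maps."""
    # (i) illumination: random gamma transformation of pixel intensities (range illustrative)
    gamma = rng.uniform(0.7, 1.3)
    image = np.clip(image, 0.0, 1.0) ** gamma

    # (ii) scale: random rescaling between 0.80x and 1.20x
    scale = rng.uniform(0.8, 1.2)
    image = ndimage.zoom(image, scale, order=1)
    prob_maps = np.stack([ndimage.zoom(m, scale, order=1) for m in prob_maps])

    # (iii) rotation: rotate image and probability maps together by +/- 15 degrees
    angle = rng.uniform(-15.0, 15.0)
    image = ndimage.rotate(image, angle, reshape=False, order=1)
    prob_maps = np.stack([ndimage.rotate(m, angle, reshape=False, order=1) for m in prob_maps])

    # note: rescaling changes the array size, so the output would still need to be
    # cropped or padded back to the network's fixed input resolution
    return image, prob_maps
```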
Computing hardware and software
We trained our model on a desktop computing workstation with an Intel Core i9-7900X CPU, 32 GB of DDR4 RAM, and a GeForce GTX 1080 GPU. With 37,000 manually and automatically labeled images, training takes nearly 20 h on a single GeForce GTX 1080 GPU. Our code is implemented in Python 3.6, PyTorch 0.4, and CUDA 9.2. Using this desktop configuration, our network runs at 100 frames per second (FPS) using the eight-stack variant of the Hourglass network, and at 420 FPS using the smaller two-stack version. Thanks to an effective initialization step, calibration takes 3–4 s. Error checking and error correction can be performed at 100 FPS and 10 FPS, respectively. Error correction is only performed in response to large reprojection errors and does not create a bottleneck in the overall speed of the pipeline.
Accuracy analysis
Consistent with the human pose estimation literature, we report accuracy as the Percentage of Correct Keypoints (PCK) and the Root Mean Squared Error (RMSE). PCK refers to the percentage of detected points lying within a specific radius of the ground-truth label. We set this threshold to 50 pixels, which is roughly one third of the 3D length of the femur. The final estimated position of each keypoint was obtained by selecting the pixel with the largest probability value in the relevant probability map. We compared DeepFly3D’s annotations with manually annotated ground-truth labels to test our model’s accuracy. For RMSE, we report the square root of the mean squared pixel distance between the prediction and the ground-truth location of the tracked point. We remove trivial points such as the body-coxa and coxa-femur—which remain relatively stationary—to fairly evaluate our algorithms and to prevent these points from dominating our accuracy measurements.
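These two metrics can be made concrete with a short sketch (a minimal example, not the DeepFly3D evaluation code; the array shapes are illustrative):

```python
import numpy as np

def rmse(pred, gt):
    """Root mean squared pixel distance between predicted and ground-truth keypoints."""
    d = np.linalg.norm(pred - gt, axis=-1)          # per-keypoint pixel distance
    return np.sqrt(np.mean(d ** 2))

def pck(pred, gt, threshold=50.0):
    """Percentage of detected keypoints within `threshold` pixels of the ground truth."""
    d = np.linalg.norm(pred - gt, axis=-1)
    return 100.0 * np.mean(d <= threshold)

# pred and gt are (n_frames, n_keypoints, 2) arrays, with the relatively stationary
# joints (e.g. body-coxa, coxa-femur) removed beforehand, as described above.
```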
From 2D landmarks to 3D trajectories
In the previous section, we described our approach to detect 38 2D landmarks. Let ${\mathbf{x}}_{c,j}\in {\mathbb{R}}^{2}$ denote the 2D position of landmark $j$ in the image acquired by camera $c$. For each landmark, our task is now to estimate the corresponding 3D position, ${\mathbf{X}}_{j}\in {\mathbb{R}}^{3}$. To accomplish this, we used triangulation and bundle adjustment (Hartley and Zisserman, 2000) to compute 3D locations, and we used pictorial structures (Felzenszwalb and Huttenlocher, 2005) to enforce geometric consistency and to eliminate potential errors caused by misdetections. We present these steps below.
Pinhole camera model
The first step is to model the projection operation that relates a specific ${\mathbf{X}}_{j}$ to its projections ${\mathbf{x}}_{c,j}$ in each of the seven camera views. To make this easier, we follow standard practice and convert all Cartesian coordinates $[{x}_{c},{y}_{c},{z}_{c}]$ to homogeneous ones $[{x}_{h},{y}_{h},{z}_{h},s]$ such that ${x}_{c}={x}_{h}/s$, ${y}_{c}={y}_{h}/s$, ${z}_{c}={z}_{h}/s$. From now on, we will assume that all points are expressed in homogeneous coordinates and omit the $h$ subscript. Assuming that these coordinates are expressed in a coordinate system whose origin is the optical center of the camera and whose z-axis is its optical axis, the 2D image projection $[u,v]$ of a 3D homogeneous point $[x,y,z,1]$ can be written as

${[u,v,1]}^{\top}\sim \mathbf{K}\,{[x,y,z,1]}^{\top},\qquad \mathbf{K}=\left[\begin{array}{cccc}{f}_{x}& 0& {c}_{x}& 0\\ 0& {f}_{y}& {c}_{y}& 0\\ 0& 0& 1& 0\end{array}\right],$ (1)
where the 3 × 4 matrix $\mathbf{K}$ is known as the intrinsic parameters matrix; it characterizes the camera settings through the scalings ${f}_{x}$ and ${f}_{y}$ in the $x$ and $y$ directions and the image coordinates ${c}_{x}$ and ${c}_{y}$ of the principal point.
In practice, the 3D points are not expressed in a camera-fixed coordinate system, especially in our application where we use seven different cameras. Therefore, we use a world coordinate system that is common to all cameras. For each camera, we must therefore convert 3D coordinates expressed in this world coordinate system to camera coordinates. This requires rotating and translating the coordinates to account for the position of the camera’s optical center and its orientation. When using homogeneous coordinates, this is accomplished by multiplying the coordinate vector by a 4 × 4 extrinsic parameters matrix

$\mathbf{M}=\left[\begin{array}{cc}\mathbf{R}& \mathbf{T}\\ \mathbf{0}& 1\end{array}\right],$ (2)
where $\mathbf{R}$ is a 3 × 3 rotation matrix and $\mathbf{T}$ a 3 × 1 translation vector. Combining Equation 1 and Equation 2 yields

${[u,v,1]}^{\top}\sim \mathbf{K}\,\mathbf{M}\,{[x,y,z,1]}^{\top}=\mathbf{P}\,{[x,y,z,1]}^{\top},$ (3)
where $\mathbf{P}=\mathbf{K}\mathbf{M}$ is a 3 × 4 matrix.
Camera distortion
The pinhole camera model described above is an idealized one; the projections of real cameras deviate from it. These deviations are referred to as distortions and must be accounted for. The most significant is known as radial distortion because the error grows with the distance from the image center. For the cameras we use, radial distortion can be expressed as

$u={u}_{\text{pinhole}}\,(1+{k}_{1}^{x}{r}^{2}+{k}_{2}^{x}{r}^{4}),\qquad v={v}_{\text{pinhole}}\,(1+{k}_{1}^{y}{r}^{2}+{k}_{2}^{y}{r}^{4}),$ (4)

where $[u,v]$ is the actual projection of a 3D point, $[{u}_{\text{pinhole}},{v}_{\text{pinhole}}]$ is the one the pinhole model predicts, both measured relative to the image center, and $r$ denotes the distance of the pinhole projection from that center. In other words, the four parameters $\{{k}_{1}^{x},{k}_{2}^{x},{k}_{1}^{y},{k}_{2}^{y}\}$ characterize the distortion. From now on, we will therefore write the full projection as

$\mathbf{x}={f}_{d}({f}_{p}(\mathbf{X})),$ (5)

where ${f}_{p}$ denotes the ideal pinhole projection of Equation 3 and ${f}_{d}$ the correction of Equation 4.
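Putting the pinhole model and the radial correction together gives a per-camera projection function of the kind used below for triangulation and calibration. The following is a minimal sketch under the conventions above (column vectors, $\mathbf{P}=\mathbf{K}\mathbf{M}$, correction applied to coordinates centered on the principal point); it is an illustration rather than the DeepFly3D implementation, and the distortion parameterization should be treated as an assumption.

```python
import numpy as np

def project(X_world, K, R, T, dist):
    """Project 3D world points into one camera view: pinhole model plus radial correction.

    X_world: (N, 3) array of 3D points in world coordinates.
    K:       (3, 4) intrinsic matrix.   R: (3, 3) rotation.   T: (3,) translation.
    dist:    (k1x, k2x, k1y, k2y) radial distortion coefficients (illustrative model).
    """
    # extrinsics: world -> camera coordinates, in homogeneous form (Equation 2)
    M = np.eye(4)
    M[:3, :3], M[:3, 3] = R, T
    X_h = np.hstack([X_world, np.ones((len(X_world), 1))])      # (N, 4)

    # pinhole projection with P = K M (Equation 3)
    uvw = (K @ M @ X_h.T).T                                      # (N, 3)
    u_pin = uvw[:, 0] / uvw[:, 2]
    v_pin = uvw[:, 1] / uvw[:, 2]

    # radial correction about the principal point (c_x, c_y) (Equation 4)
    k1x, k2x, k1y, k2y = dist
    cx, cy = K[0, 2], K[1, 2]
    du, dv = u_pin - cx, v_pin - cy
    r2 = du ** 2 + dv ** 2
    u = cx + du * (1 + k1x * r2 + k2x * r2 ** 2)
    v = cy + dv * (1 + k1y * r2 + k2y * r2 ** 2)
    return np.stack([u, v], axis=1)                              # (N, 2) pixel coordinates
```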
Triangulation
We can associate to each of the seven cameras a projection function ${\pi}_{c}$ of the form given in Equation 5, where $c$ is the camera number. Given a 3D point $\mathbf{X}$ and its projections ${\mathbf{x}}_{c}$ in the images, its 3D coordinates can be estimated by minimizing the reprojection error

$\underset{\mathbf{X}}{\mathrm{argmin}}\;\sum _{c=1}^{7}{e}_{c}\,{\Vert {\pi}_{c}(\mathbf{X})-{\mathbf{x}}_{c}\Vert}_{2}^{2},$ (6)
where ${e}_{c}$ is one if the point was visible in image $c$ and zero otherwise. In the absence of camera distortion, that is, when the projection $\pi$ is a purely linear operation in homogeneous coordinates, this can be done for any number of cameras by solving a Singular Value Decomposition (SVD) problem (Hartley and Zisserman, 2000). In the presence of distortions, we replace the observed $u$ and $v$ coordinates of the projections by the corresponding ${u}_{\text{pinhole}}$ and ${v}_{\text{pinhole}}$ values of Equation 5 before performing the SVD.
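The linear triangulation step can be written compactly as a direct linear transform (DLT). The sketch below assumes the 2D observations have already been undistorted as described above; it is a generic illustration rather than the DeepFly3D code.

```python
import numpy as np

def triangulate(points_2d, projection_matrices, visible):
    """Linear (DLT) triangulation of one landmark from multiple camera views.

    points_2d:            (n_cams, 2) undistorted pixel coordinates.
    projection_matrices:  list of 3 x 4 camera matrices P_c.
    visible:              boolean mask e_c, True if the landmark is seen by camera c.
    """
    rows = []
    for (u, v), P, e in zip(points_2d, projection_matrices, visible):
        if not e:
            continue
        # each visible view contributes two linear constraints on the homogeneous 3D point
        rows.append(u * P[2] - P[0])
        rows.append(v * P[2] - P[1])
    A = np.stack(rows)
    _, _, vt = np.linalg.svd(A)       # solution: right singular vector of the smallest singular value
    X = vt[-1]
    return X[:3] / X[3]               # back to Cartesian coordinates
```

Because each visible camera contributes two rows to the linear system, any subset of two or more views is sufficient to recover the 3D point.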
Camera calibration
Triangulating as described above requires knowing, for each camera $c$, the projection matrix ${\mathbf{P}}_{c}$ of Equation 3 and the corresponding distortion parameters $\{{k}_{1}^{x},{k}_{2}^{x},{k}_{1}^{y},{k}_{2}^{y}\}$ of Equation 4, together with the intrinsic parameters of focal length and principal point offset. In practice, we use the focal length and principal point offset provided by the manufacturer and estimate the remaining parameters automatically: the three translations and three rotations for each camera that define the corresponding matrix $\mathbf{M}$ of extrinsic parameters, along with the distortion parameters.
To avoid having to design the exceedingly small calibration pattern that more traditional methods use to estimate these parameters, we use the fly itself as a calibration pattern and minimize the reprojection error of Equation 6 for all joints simultaneously while also allowing the camera parameters to change. In other words, we look for

$\underset{\{{\mathbf{X}}_{j}\},\{{\pi}_{c}\}}{\mathrm{argmin}}\;\sum _{c=1}^{7}\sum _{j}{e}_{c,j}\,\rho \left({\Vert {\pi}_{c}({\mathbf{X}}_{j})-{\mathbf{x}}_{c,j}\Vert}_{2}\right),$ (7)
where ${\mathbf{X}}_{j}$ and ${\mathbf{x}}_{c,j}$ are the 3D locations and 2D projections of the landmarks introduced above, and $\rho$ denotes the Huber loss. Equation 7 is known as bundle adjustment (Hartley and Zisserman, 2000). The Huber loss is defined as

$\rho (a)=\left\{\begin{array}{ll}\frac{1}{2}{a}^{2},& |a|\le \delta ,\\ \delta \left(|a|-\frac{1}{2}\delta \right),& \text{otherwise.}\end{array}\right.$
Replacing the squared loss by the Huber loss makes our approach more robust to erroneous detections ${\mathbf{x}}_{c,j}$. We empirically set $\delta$ to 20 pixels. Note that we perform this minimization with respect to ten degrees of freedom per camera: three translations, three rotations, and four distortion parameters.
For this optimization to work properly, we need to initialize these ten parameters per camera and to reduce the number of outliers. The initial distortion parameters are set to zero. To initialize the rotation and translation vectors, we measure the distances and angles between adjacent cameras, from which we infer rough initial estimates. Finally, we rely on epipolar geometry (Hartley and Zisserman, 2000) to automate outlier rejection. Because the cameras form a rough circle and look inward, the epipolar lines are close to horizontal (Figure 8A). Thus, corresponding 2D projections must belong to the same image rows, or at most a few pixels higher or lower. In practice, this means checking whether all 2D predictions lie in nearly the same rows and discarding a priori those that do not.
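Bundle adjustment of this form can be prototyped with a generic robust nonlinear least-squares solver. The sketch below uses SciPy's built-in 'huber' loss with f_scale = 20 px to mirror Equation 7; the parameter packing is illustrative, and the distortion terms are omitted from the residual for brevity (they would be applied as in the projection sketch above).

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def residuals(params, points_2d, visible, n_cams, n_pts, K):
    """Stacked reprojection residuals over all cameras and landmarks (Equation 7).

    params packs, per camera, 3 rotation (axis-angle) + 3 translation + 4 distortion
    values, followed by the flattened 3D landmark coordinates."""
    cams = params[:n_cams * 10].reshape(n_cams, 10)
    X = params[n_cams * 10:].reshape(n_pts, 3)
    X_h = np.hstack([X, np.ones((n_pts, 1))])
    res = []
    for c in range(n_cams):
        R = Rotation.from_rotvec(cams[c, :3]).as_matrix()
        M = np.eye(4)
        M[:3, :3], M[:3, 3] = R, cams[c, 3:6]
        uvw = (K[c] @ M @ X_h.T).T                 # pinhole projection, P = K M
        proj = uvw[:, :2] / uvw[:, 2:3]            # distortion omitted for brevity
        res.append(((proj - points_2d[c]) * visible[c][:, None]).ravel())
    return np.concatenate(res)

# Robust bundle adjustment: Huber loss with delta = 20 px, as in the text.
# x0 stacks initial camera parameters and triangulated 3D points.
# result = least_squares(residuals, x0, loss="huber", f_scale=20.0,
#                        args=(points_2d, visible, n_cams, n_pts, K))
```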
3D pose correction
The triangulation procedure described above can produce erroneous results when the 2D estimates of landmark positions are wrong. Additionally, it may result in implausible 3D poses for the entire animal because it treats each joint independently. To enforce more global geometric constraints, we rely on pictorial structures (Felzenszwalb and Huttenlocher, 2005), as described in Figure 9. Pictorial structures encode the relationship between a set of variables (in this case the 3D locations of separate tracked points) in a probabilistic setting using a graphical model. This makes it possible to consider multiple 2D locations ${\mathbf{x}}_{c,j}$ for each landmark ${\mathbf{X}}_{j}$ instead of only one, which increases the likelihood of finding the true 3D pose.
Generating multiple candidates
Instead of selecting landmark locations as the pixels with maximum probability in the maps output by our Stacked Hourglass network, we generate multiple candidate 2D locations ${x}_{c,j}$ for each landmark. From each probability map, we select the 10 local probability maxima that are at least one pixel apart from one another. Then, we generate 3D candidates by triangulating the 2D candidates in every pair of cameras. Because a single point is visible from at most four cameras, this results in at most $\binom{4}{2}\times {10}^{2}$ candidates for each tracked point.
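Extracting the candidate set from a probability map amounts to keeping the strongest local maxima. One minimal way to do this is sketched below (the library choice and function names are illustrative):

```python
import numpy as np
from scipy.ndimage import maximum_filter

def candidate_peaks(prob_map, n_candidates=10, min_distance=1):
    """Return up to `n_candidates` local maxima of a probability map, each at least
    `min_distance` pixels from any stronger neighbor."""
    footprint = np.ones((2 * min_distance + 1, 2 * min_distance + 1))
    is_peak = (prob_map == maximum_filter(prob_map, footprint=footprint))
    ys, xs = np.nonzero(is_peak)
    order = np.argsort(prob_map[ys, xs])[::-1][:n_candidates]   # strongest peaks first
    return np.stack([xs[order], ys[order]], axis=1)             # (x, y) candidates
```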
Choosing the best candidates
To identify the best subset of resulting 3D locations, we introduce the probability distribution $P(L\mid I,\theta )$ that assigns a probability to each solution $L$, consisting of 38 sets of 2D points observed from each camera. Our goal is then to find the most likely one. More formally, $P$ represents the likelihood of a set of tracked points $L$, given the images, model parameters, camera calibration, and geometric constraints. In our formulation, $I$ denotes the seven camera images $I={\{{I}_{c}\}}_{1\le c\le 7}$ and $\theta$ represents the set of projection functions ${\pi}_{c}$ for each camera $c$ along with a set of length distributions ${S}_{i,j}$ between each pair of points $i$ and $j$ that are connected by a limb segment. $L$ consists of a set of tracked points ${\{{L}_{i}\}}_{1\le i\le n}$, where each ${L}_{i}$ describes a set of 2D observations ${l}_{i,c}$ from multiple camera views. These are used to triangulate the corresponding 3D point locations ${\overline{l}}_{i}$. If the set of 2D observations is incomplete, because some points are totally occluded in some camera views, we triangulate the 3D point ${\overline{l}}_{i}$ using the available observations and replace the missing ones by projecting the recovered 3D position into the images, ${\pi}_{c}({\overline{l}}_{i})$, using Equation 3. In the end, we aim to find the solution $\widehat{L}=\underset{L}{\mathrm{argmax}}\,P(L\mid I,\theta )$. This is known as Maximum a Posteriori (MAP) estimation. Using Bayes rule, we write

$P(L\mid I,\theta )\propto P(I\mid L,\theta )\,P(L\mid \theta ),$
where the two terms can be computed separately. We compute $P(I\mid L,\theta )$ using the probability maps ${H}_{j,c}$ generated by the Stacked Hourglass network for tracked point $j$ and camera $c$. For a single joint $j$ seen by camera $c$, we model the likelihood of observing that particular point using $P({H}_{j,c}\mid {l}_{j,c})$, which can be directly read from the probability map as the pixel intensity. Ignoring the dependency between the cameras, we write the overall likelihood as the product of the individual likelihood terms

$P(I\mid L,\theta )=\prod _{j}\prod _{c=1}^{7}P({H}_{j,c}\mid {l}_{j,c}),$
which can be read directly from the probability maps as pixel intensities and represent the network’s confidence that a particular keypoint is located at a particular pixel. When a point is not visible from a particular camera, we assume the corresponding probability map contains a constant non-zero probability, which does not affect the final solution. We express $P(L\mid \theta )$ as

$P(L\mid \theta )\propto \prod _{(i,j)}P({\overline{l}}_{i},{\overline{l}}_{j}\mid {S}_{i,j}),$
where the pairwise dependencies $P({\overline{l}}_{i},{\overline{l}}_{j}\mid {S}_{i,j})$ between two variables enforce the segment length constraint when the variables are connected by a limb segment. The length of a segment defined by a pair of connected 3D points is assumed to follow a normal distribution. Specifically, we model $P({\overline{l}}_{i},{\overline{l}}_{j}\mid {S}_{i,j})$ as ${S}_{i,j}({\overline{l}}_{i},{\overline{l}}_{j})=\mathcal{N}(\Vert {\overline{l}}_{i}-{\overline{l}}_{j}\Vert \mid {\mu}_{i,j},{\sigma}_{i,j})$. We model the reprojection error term for a particular point $j$ as ${\prod}_{c=1}^{7}{e}_{c,j}\,{\Vert {\pi}_{c}({\overline{l}}_{j})-{l}_{c,j}\Vert}_{2}^{-1}$, where the binary variable ${e}_{c,j}$ denotes the visibility of point $j$ from camera $c$. If a 2D observation for a particular camera is manually set by a user with the DeepFly3D GUI, we take it to be the only possible candidate for that particular image and set $P({L}_{j}\mid H)$ to 1, where $j$ denotes the manually assigned pixel location.
Solving the MAP problem using the max-sum algorithm
For general graphs, MAP estimation with pairwise dependencies is NP-hard and therefore intractable. However, in the specific case of acyclic graphs, it is possible to solve the inference problem exactly using belief propagation (Bishop, 2006). Since the fly’s skeleton has a root and contains no loops, we can use a message passing approach (Felzenszwalb and Huttenlocher, 2005). It is closely related to the Viterbi recurrence and propagates the conditional probabilities $P({L}_{j}\mid {L}_{i})$ along the edges of the graph, starting from the root and ending at the leaf nodes. This first propagation ends with the computation of the marginal distribution for the leaf node variables. During the subsequent backward iteration, once $P({L}_{j})$ for a leaf node has been computed, the point ${L}_{j}$ with maximum posterior probability is selected in $O(k)$ time, where $k$ is an upper bound on the number of proposals for a single tracked point. Next, the distribution $P({L}_{i}\mid {L}_{j})$ is calculated for the nodes adjacent to the leaf node. Continuing this process over all remaining points yields a MAP solution for the overall distribution $P(L)$, as shown in Figure 9, with $O({k}^{2})$ computation per edge of the graph.
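The recursion is easiest to see on a single kinematic chain (for example, one leg), where it reduces to a Viterbi-style dynamic program over $k$ candidates per joint. The sketch below works in log-probabilities and is a simplification of the tree-structured message passing described above, not the DeepFly3D implementation.

```python
import numpy as np

def map_chain(unary, pairwise):
    """MAP assignment on a chain of joints, each with several 3D candidates.

    unary:    list of 1D arrays; unary[i][a] = log P(candidate a of joint i | images).
    pairwise: list of 2D arrays; pairwise[i][a, b] = log P(segment between candidate a
              of joint i and candidate b of joint i+1 | S_{i,i+1}).
    """
    n = len(unary)
    score = unary[0].copy()
    backptr = []
    # forward pass: propagate best scores from the root towards the leaf
    for i in range(1, n):
        total = score[:, None] + pairwise[i - 1] + unary[i][None, :]   # (k_prev, k_cur)
        backptr.append(np.argmax(total, axis=0))
        score = np.max(total, axis=0)
    # backward pass: recover the argmax assignment
    best = [int(np.argmax(score))]
    for bp in reversed(backptr):
        best.append(int(bp[best[-1]]))
    return best[::-1]                         # candidate index chosen for each joint
```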
Learning the parameters
We learn the parameters of the set of pairwise distributions ${S}_{i,j}$ using a maximum likelihood process, assuming the distributions to be Gaussian. We model the segment length ${S}_{i,j}$ as the Euclidean distance between the points ${\overline{l}}_{i}$ and ${\overline{l}}_{j}$. We then solve for $\underset{S}{\mathrm{argmax}}\,P(S\mid L,\theta )$, assuming segment lengths have a Gaussian distribution resulting from the Gaussian noise in the point observations $L$. This gives us the mean and variance defining each distribution ${S}_{i,j}$. We exclude the same points that were removed during the calibration procedure because they exhibit high reprojection errors.
In practice, we observe a large variance for pretarsus values (Figure 10). This is because occlusions occasionally shorten the visible tarsal segments. To eliminate the resulting bias, we treat these limb segments differently from the others and model the distribution of tibia-tarsus and tarsus-tip points as a Beta distribution, with parameters found using a similar Maximum Likelihood Estimator (MLE) formulation. Assuming the observation errors to be Gaussian and zero-centered, the bundle adjustment procedure can also be understood as an MLE of the calibration parameters (Triggs et al., 2000). Therefore, the entire set of parameters in our formulation can be learned using MLE. In addition, prior information about potentially occluded targets can be used to guide inference. For example, in a head-fixed rodent, the left eye may not always be visible from the right side of the animal. This information can be incorporated into DeepFly3D’s inference system in the file skeleton.py by editing the function camera_see_joint. Afterwards, predictions from occluded cameras will not be used to triangulate a given 3D point. If no such information is provided, every prediction will be used to triangulate a given 3D point.
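For the Gaussian case, the maximum-likelihood parameters of each segment-length prior are simply the sample mean and standard deviation of the triangulated segment lengths. A minimal sketch (variable names illustrative):

```python
import numpy as np

def fit_segment_prior(points_i, points_j):
    """Maximum-likelihood Gaussian parameters (mu, sigma) for the length of the segment
    connecting two tracked points, estimated from triangulated 3D trajectories.

    points_i, points_j: (n_frames, 3) arrays of 3D positions for the two joints."""
    lengths = np.linalg.norm(points_i - points_j, axis=1)
    return lengths.mean(), lengths.std()
```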
The pictorial structure formulation could be further expanded using temporal information, for example by penalizing large movements of a single tracked point between two consecutive frames. However, we abstained from using temporal information more extensively for several reasons. First, temporal dependencies would introduce loops in our pictorial structures, making exact inference NP-hard as discussed above. This can be handled using loopy belief propagation algorithms (Murphy et al., 1999), but these require multiple message passing rounds, which prevents real-time inference and offers no theoretical guarantee of optimality. Second, the rapidity of Drosophila limb movements makes it difficult to assign temporal constraints, even with fast video recording. Finally, we empirically observed that the current formulation, which enforces structured poses within a single temporal frame, already eliminates the overwhelming majority of false positives inferred during the pose estimation stage of the algorithm.
Modifying DeepFly3D to study other animals
DeepFly3D does not assume a circular camera arrangement or that there is one degree of freedom in the camera network. Therefore, it can easily be adapted for 3D pose estimation in other animals, ranging from rodents to primates and humans. We illustrate this flexibility by using DeepFly3D to capture human 3D pose in the Human 3.6M dataset (http://vision.imar.ro/human3.6m/description.php), a very popular, publicly available computer vision benchmark dataset generated using four synchronized cameras (Ionescu et al., 2014; Ionescu et al., 2011) (Figure 11).
Generally, for any new dataset, the user first needs to provide an initial set of manual annotations. The user describes the number of tracked points and their relationships to one another in a Python setup file. Then, in a configuration file, the user specifies the number of cameras along with the resolutions of input images and output probability maps. DeepFly3D will then use these initial manual annotations to (i) train the 2D Stacked Hourglass network, (ii) perform camera calibration without an external calibration pattern, (iii) learn the epipolar geometry to perform outlier detection, and (iv) learn the segment length distributions ${S}_{i,j}$. After this initial bootstrapping, DeepFly3D can then be used with pictorial structures and active learning to iteratively improve pose estimation accuracy.
The initial manual annotations can be performed using the DeepFly3D Annotation GUI. Afterwards, these annotations can be downloaded from the Annotation GUI as a CSV file using the Save button (Figure 7). Once the CSV file is placed in the images folder, DeepFly3D will automatically read and display the annotations. To train the Stacked Hourglass network, use the csvpath flag while running pose2d.py (found in deepfly/pose2d/). DeepFly3D will then train the Stacked Hourglass network by performing transfer learning using the large MPII dataset and the smaller set of user manual annotations.
To perform camera calibration, the user should select the Calibration button in the GUI (Figure 12). DeepFly3D will then perform bundle adjustment (Equation 7) and save the camera parameters in calibration.pickle (found in the images folder). The path of this file should then be added to Config.py to initialize calibration. These initial calibration parameters will then be used in further experiments for fast and accurate convergence. If the number of annotations is insufficient for accurate calibration, or if bundle adjustment converges too slowly, an initial rough estimate of the camera locations can be set in Config.py. As long as a calibration is set in Config.py, DeepFly3D will use it as a projection matrix to calculate the epipolar geometry between cameras. This step is necessary to perform outlier detection during further calibration operations.
DeepFly3D will also learn the distributions ${S}_{i,j}$, whose non-zero entries are found in skeleton.py. One can easily calculate these segment length distribution parameters using the functions provided with DeepFly3D. The CameraNetwork class (found under deepfly/GUI/) will automatically load the points and calibration parameters from the images folder. The function CameraNetwork.triangulate will convert 2D annotation points into 3D points using the calibration parameters. The ${S}_{i,j}$ parameters can then be saved using the pickle library (the save path can be set in Config.py). The calcBoneParams method will then output the segment length means and variances. These values will then be used with pictorial structures (Equation 8).
We provide further technical details for how to adapt DeepFly3D to other multiview datasets online (https://github.com/NeLyEPFL/DeepFly3D [Günel et al., 2019] copy archived at https://github.com/elifesciencespublications/DeepFly3D).
Experimental setup
We positioned seven Basler acA1920-155um cameras (FUJIFILM AG, Niederhaslistrasse, Switzerland) 94 mm away from the tethered fly, resulting in a circular camera network with the animal in the center (Figure 13). We acquired 960 × 480 pixel video data at 100 FPS under 850 nm infrared ring light illumination (Stemmer Imaging, Pfäffikon, Switzerland). Cameras were mounted with 94 mm W.D./1.00x InfiniStix lenses (Infinity Photo-Optical GmbH, Göttingen). Optogenetic stimulation LED light was filtered out using 700 nm longpass optical filters (Edmund Optics, York, UK). Each camera’s depth of field was increased using 5.8 mm aperture retainers (Infinity Photo-Optical GmbH). To automate the timing of optogenetic LED stimulation and camera acquisition triggering, we used an Arduino (Arduino, Somerville, MA) and custom software written using the Basler camera API.
We assessed the optimal number of cameras for DeepFly3D and concluded that increasing the number of cameras increases accuracy by stabilizing triangulation. Specifically, we observed the following. (i) Calibration is not a significant source of error: calibrating with fewer than seven cameras does not dramatically increase estimation error. (ii) Having more cameras improves triangulation: reducing the number of cameras to four, even when calibration was performed with seven cameras, increases triangulation error by 0.05 mm. This may be because the camera views are sufficiently different that their 2D detection failure cases largely do not overlap. Thus, the redundancy provided by having more cameras mitigates detection errors by finding a 3D pose that is consistent across at least two camera views.
Drosophila transgenic lines
UAS-CsChrimson (Klapoetke et al., 2014) animals were obtained from the Bloomington Stock Center (stock #55135). MDN-1-Gal4 (Bidaye et al., 2014) (VT44845-DBD; VT50660-AD) was provided by B. Dickson (Janelia Research Campus, Ashburn). aDN-Gal4 (Hampel et al., 2015) (R76F12-AD; R18C11-DBD) was provided by J. Simpson (University of California, Santa Barbara). Wild-type PR animals were provided by M. Dickinson (California Institute of Technology, Pasadena).
Optogenetic stimulation experiments
Experiments were performed in the late morning or early afternoon Zeitgeber time (Z.T.), inside a dark imaging chamber. An adult female animal, 2–3 days post-eclosion (dpe), was mounted onto a custom stage (Chen et al., 2018) and allowed to acclimate for 5 min on an air-supported spherical treadmill (Chen et al., 2018). Optogenetic stimulation was performed using a 617 nm LED (Thorlabs, Newton, NJ) pointed at the dorsal thorax through a hole in the stage and focused with a lens (LA1951, Ø1", f = 25.4 mm, Thorlabs, Newton, NJ). Tethered flies were otherwise allowed to behave spontaneously. Data were acquired in 9 s epochs: 2 s of baseline, 5 s with optogenetic illumination, and 2 s without stimulation. Individual flies were recorded for five trials each, with one-minute intervals between trials. Data were excluded from analysis if flies pushed their abdomens onto the spherical treadmill—interfering with limb movements—or if flies struggled during optogenetic stimulation, pushing their forelimbs onto the stage for prolonged periods of time.
Unsupervised behavioral classification
To create unsupervised embeddings of behavioral data, we largely followed the approach taken by Todd et al. (2017) and Berman et al. (2014). We smoothed 3D pose traces using a 1€ filter (Casiez et al., 2012) and then converted them into joint angles to achieve scale and translational invariance. Angles were calculated by taking the dot product of the two segments defined by each set of three connected 3D positions. For the antennae, we calculated the angle of the line defined by the two antennal points with respect to the ground plane. In this way, we generated four angles per leg (two body-coxa, one coxa-femur, and one femur-tibia), two angles for the abdomen (top and bottom abdominal stripes), and a single angle for the antennae (head tilt with respect to the axis of gravity). In total, we obtained a set of 20 angles, extracted from 38 3D points.
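The angle computations described above reduce to dot products between 3D segment vectors. A minimal sketch follows; the gravity axis used for the antennal tilt is an illustrative convention, not necessarily the one used in the original analysis.

```python
import numpy as np

def joint_angle(p_parent, p_joint, p_child):
    """Angle (radians) at `p_joint` formed by three connected 3D points,
    computed from the dot product of the two adjacent segments."""
    u = p_parent - p_joint
    v = p_child - p_joint
    cos_a = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.arccos(np.clip(cos_a, -1.0, 1.0))

def antenna_tilt(p_left, p_right, gravity=np.array([0.0, 0.0, -1.0])):
    """Tilt of the line defined by the two antennal points with respect to the
    ground plane (the gravity axis shown here is an assumption)."""
    d = p_right - p_left
    cos_a = np.dot(d, gravity) / (np.linalg.norm(d) * np.linalg.norm(gravity))
    return np.pi / 2 - np.arccos(np.clip(cos_a, -1.0, 1.0))
```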
We transformed the angular time series using a Continuous Wavelet Transform (CWT) to create a posture-dynamics space. We used the Morlet wavelet as the mother wavelet, given its suitability for isolating periodic chirps of motion. We chose 25 wavelet scales to match dyadically spaced center frequencies between 5 Hz and 50 Hz. Then, we calculated spectrograms for each postural time series by taking the magnitudes of the wavelet coefficients. This yields a 20 × 25 = 500-dimensional time series, which was then normalized over all frequency channels to unit length at each time instance. We could then treat each feature vector from each time instance as a distribution over all frequency channels.
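A sketch of this transform is shown below, using PyWavelets' Morlet implementation as one possible realization. The library choice, sampling rate, and scale-to-frequency mapping are assumptions; only the 25 dyadically spaced center frequencies between 5 and 50 Hz and the unit-length normalization follow the description above.

```python
import numpy as np
import pywt

def posture_dynamics(angles, fs=100.0, n_scales=25, f_min=5.0, f_max=50.0):
    """Continuous wavelet transform of joint-angle time series into a normalized
    posture-dynamics space.

    angles: (n_frames, n_angles) array of joint angles sampled at `fs` Hz."""
    # dyadically spaced center frequencies between f_min and f_max
    freqs = f_min * 2.0 ** np.linspace(0, np.log2(f_max / f_min), n_scales)
    scales = pywt.central_frequency("morl") * fs / freqs

    spectra = []
    for i in range(angles.shape[1]):
        coeffs, _ = pywt.cwt(angles[:, i], scales, "morl", sampling_period=1.0 / fs)
        spectra.append(np.abs(coeffs))                 # (n_scales, n_frames) magnitudes
    feats = np.concatenate(spectra, axis=0).T          # (n_frames, n_angles * n_scales)

    # normalize each time point over all frequency channels so it can be treated
    # as a distribution
    return feats / np.clip(feats.sum(axis=1, keepdims=True), 1e-12, None)
```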
Next, from the posture-dynamics space, we computed a two-dimensional representation of behavior using the nonlinear embedding algorithm t-SNE (Maaten, 2008). t-SNE embeds our high-dimensional posture-dynamics space onto a 2D plane, preserving local structure in the high-dimensional space while sacrificing larger-scale accuracy. We used the Kullback–Leibler (KL) divergence as the distance function in our t-SNE algorithm. KL divergence assesses the difference between the shapes of two distributions, justifying the normalization performed in the preceding step. By analyzing a multitude of plots generated with different perplexity values, we empirically found a perplexity value of 35 to best suit the features of our posture-dynamics space.
From this discrete embedding, we created a continuous 2D distribution that we could then segment into behavioral clusters. We started by normalizing the 2D t-SNE-projected space onto a 1000 × 1000 grid. Then, we applied a 2D Gaussian convolution with a kernel of size $\sigma$ = 10 px. Finally, we segmented this space by inverting it and applying a watershed algorithm that separates adjacent basins, yielding a behavioral map.
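The embedding and segmentation steps could be prototyped as below. Supplying the KL divergence as a precomputed distance matrix to scikit-learn's t-SNE is one illustrative way to realize the distance function described above, and the marker-based watershed is likewise an assumption rather than the exact analysis code used here.

```python
import numpy as np
from sklearn.manifold import TSNE
from scipy.ndimage import gaussian_filter
from scipy.stats import entropy
from skimage.segmentation import watershed
from skimage.feature import peak_local_max

def embed_and_segment(feats, perplexity=35, grid=1000, sigma=10):
    """t-SNE embedding of the posture-dynamics space followed by density-based
    watershed segmentation into behavioral clusters."""
    # pairwise symmetrized KL divergence between normalized feature vectors
    # (scipy's entropy re-normalizes its inputs, so unit-length rows are safe)
    n = len(feats)
    D = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            D[i, j] = entropy(feats[i], feats[j]) + entropy(feats[j], feats[i])
    xy = TSNE(perplexity=perplexity, metric="precomputed", init="random").fit_transform(D)

    # rasterize the embedding onto a grid, smooth, invert, and apply the watershed
    hist, _, _ = np.histogram2d(xy[:, 0], xy[:, 1], bins=grid)
    density = gaussian_filter(hist, sigma=sigma)
    markers_xy = peak_local_max(density)
    markers = np.zeros_like(density, dtype=int)
    markers[tuple(markers_xy.T)] = np.arange(1, len(markers_xy) + 1)
    labels = watershed(-density, markers)              # one label per behavioral cluster
    return xy, labels
```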
Data availability
All data generated and analyzed during this study are included in the DeepFly3D GitHub site: https://github.com/NeLyEPFL/DeepFly3D (copy archived at https://github.com/elifesciencespublications/DeepFly3D) and in the Harvard Dataverse.
References

Conference: 2D human pose estimation: new benchmark and state of the art analysis. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pp. 3686–3693. https://doi.org/10.1109/CVPR.2014.471

Mapping the stereotyped behaviour of freely moving fruit flies. Journal of the Royal Society Interface 11:20140672. https://doi.org/10.1098/rsif.2014.0672

Conference: 1€ filter: a simple speed-based low-pass filter for noisy input in interactive systems. Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, ACM. pp. 2527–2530.

Conference: WILDTRACK: a multi-camera HD dataset for dense unscripted pedestrian detection. The IEEE Conference on Computer Vision and Pattern Recognition (CVPR). pp. 5030–5039.

Conference: Efficient ConvNet-based marker-less motion capture in general scenes with a low number of cameras. IEEE Conference on Computer Vision and Pattern Recognition (CVPR). https://doi.org/10.1109/CVPR.2015.7299005

Pictorial structures for object recognition. International Journal of Computer Vision 61:55–79. https://doi.org/10.1023/B:VISI.0000042934.15159.49

Mechanisms of Parkinson's disease: lessons from Drosophila. Current Topics in Developmental Biology 121:173–200. https://doi.org/10.1016/bs.ctdb.2016.07.005

Conference: Latent structured models for human pose estimation. 2011 International Conference on Computer Vision, IEEE. pp. 2220–2227. https://doi.org/10.1109/ICCV.2011.6126500

Human3.6M: large scale datasets and predictive methods for 3D human sensing in natural environments. IEEE Transactions on Pattern Analysis and Machine Intelligence 36:1325–1339. https://doi.org/10.1109/TPAMI.2013.248

Recovery of locomotion after injury in Drosophila melanogaster depends on proprioception. The Journal of Experimental Biology 219:1760–1771. https://doi.org/10.1242/jeb.133652

Leg-tracking and automated behavioural classification in Drosophila. Nature Communications 4:1910. https://doi.org/10.1038/ncomms2908

Independent optical excitation of distinct neural populations. Nature Methods 11:338–346. https://doi.org/10.1038/nmeth.2836

Visualizing high-dimensional data using t-SNE. Journal of Machine Learning Research 9:2579–2605.

DeepLabCut: markerless pose estimation of user-defined body parts with deep learning. Nature Neuroscience 21:1281–1289. https://doi.org/10.1038/s41593-018-0209-y

Conference: VNect: real-time 3D human pose estimation with a single RGB camera. SIGGRAPH.

Conference: Multiple cues used in model-based human motion capture. Proceedings Fourth IEEE International Conference on Automatic Face and Gesture Recognition (Cat. No. PR00580). pp. 362–367. https://doi.org/10.1109/AFGR.2000.840660

Conference: 3D human pose estimation from a single image via distance matrix regression. CVPR.

Conference: Loopy belief propagation for approximate inference: an empirical study. Conference on Uncertainty in Artificial Intelligence. pp. 467–475.

Conference: Stacked hourglass networks for human pose estimation. European Conference on Computer Vision, Springer. pp. 483–499.

Conference: Harvesting multiple views for marker-less 3D human pose annotations. CVPR.

Fast animal pose estimation using deep neural networks. Nature Methods 16:117–125. https://doi.org/10.1038/s41592-018-0234-5

Conference: Deep multitask architecture for integrated 2D and 3D human sensing. CVPR.

Conference: Joint camera pose estimation and 3D human pose estimation in a multi-camera setup. pp. 473–487, Springer.

Conference: General automatic human shape and motion capture using volumetric contour cues. ECCV.

Conference: Hand keypoint detection in single images using multiview bootstrapping. CVPR.

Conference: Human pose as calibration pattern: 3D human pose estimation with multiple unsynchronized and uncalibrated cameras. The IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops.

Conference: Learning to fuse 2D and 3D image cues for monocular body pose estimation. ICCV.

Conference: Weakly-supervised transfer for 3D human pose estimation in the wild. IEEE International Conference on Computer Vision.
Article and author information
Author details
Funding
Schweizerischer Nationalfonds zur Förderung der Wissenschaftlichen Forschung (175667)
 Daniel Morales
 Pavan Ramdya
Schweizerischer Nationalfonds zur Förderung der Wissenschaftlichen Forschung (181239)
 Daniel Morales
 Pavan Ramdya
EPFL (iPhD)
 Semih Günel
Microsoft Research (JRC Project)
 Helge Rhodin
Swiss Government Excellence Postdoctoral Scholarship (2018.0483)
 Daniel Morales
The funders had no role in study design, data collection and interpretation, or the decision to submit the work for publication.
Acknowledgements
We thank Celine Magrini and Fanny Magaud for image annotation assistance, and Raphael Laporte and Victor Lobato Ríos for helping to develop camera acquisition software.
Version history
 Received: May 18, 2019
 Accepted: September 28, 2019
 Accepted Manuscript published: October 4, 2019 (version 1)
 Version of Record published: November 4, 2019 (version 2)
Copyright
© 2019, Günel et al.
This article is distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use and redistribution provided that the original author and source are credited.