Automatic and accurate reconstruction of long-range axonal projections of single-neuron in mouse brain

  1. Lin Cai
  2. Taiyu Fan
  3. Xuzhong Qu
  4. Ying Zhang
  5. Xianyu Gou
  6. Quanwei Ding
  7. Weihua Feng
  8. Tingting Cao
  9. Xiaohua Lv
  10. Xiuli Liu
  11. Qing Huang
  12. Tingwei Quan  Is a corresponding author
  13. Shaoqun Zeng
  1. Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics, Huazhong University of Science and Technology, China
  2. MOE Key Laboratory for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics, Huazhong University of Science and Technology, China
  3. School of Computer Science and Engineering, Hubei Key Laboratory of Intelligent Robot, Wuhan Institute of Technology, China

eLife Assessment

This important paper takes a novel approach to the problem of automatically reconstructing long-range axonal projections from stacks of images. The key innovation is to separate the identification of sections of an axon from the statistical rules used to constrain global structure. The authors provide compelling evidence that their method is a significant improvement over existing measures in circumstances where the labelling of axons and dendrites is relatively dense.

https://doi.org/10.7554/eLife.102840.3.sa0

Abstract

Single-neuron axonal projections reveal the route map of neuron output and provide a key cue for understanding how information flows across the brain. Reconstructing single-neuron axonal projections requires intensive manual operations on tens of terabytes of brain imaging data and is highly time-consuming and labor-intensive. The core issue is the need for precise reconstruction algorithms that avoid reconstruction errors, yet current methods, which focus mainly on skeleton extraction, struggle with densely distributed axons. To overcome this, we introduce a point assignment-based method that uses cylindrical point sets to accurately represent axons and a minimal information flow tree model to suppress the snowball effect of reconstruction errors. Our method successfully reconstructs single-neuron axonal projections across hundreds of gigabytes of image data within a mouse brain with an average F1-score of 80%, while current methods achieve F1-scores below 40% on images of only a few hundred megabytes. This substantial improvement will aid high-throughput mapping of neuronal projections.

Introduction

Neuronal axons generally project to different brain regions, and their projection distribution is an essential cue for neuron type identification, neuronal circuit construction, and deeper insight into how information flows in the brain (Huang and Luo, 2015; Meijering, 2010; Parekh and Ascoli, 2013; Zingg et al., 2014). Advances in optical imaging and molecular labeling techniques (Cai et al., 2019; Chung and Deisseroth, 2013; Çiçek et al., 2016; Kim and Schnitzer, 2022; Li et al., 2010; Osten and Margrie, 2013) have allowed us to observe the entire mouse brain at single-axon resolution and have provided the database for studying neuronal projection patterns (Foster et al., 2021; Gao et al., 2022; Muñoz-Castañeda et al., 2021; Peng et al., 2021; Qiu et al., 2024; Sun et al., 2019; Xu et al., 2021; Zeng, 2022). However, the reconstruction of these long-range projecting axons still requires extensive manual annotation in tens of terabytes of volumetric images (Çiçek et al., 2016; Friedmann et al., 2020; Wang et al., 2019; Winnubst et al., 2019; Zhou et al., 2021); this labor-intensive process creates a major bottleneck for high-throughput mapping of neuronal projections (Zeng and Sanes, 2017).

The difficulties in reconstructing the long-range projections of neurons are as follows. On the one hand, while molecular labeling techniques can label only a very small fraction of neurons, a significant fraction of neuronal axons remains densely distributed owing to the morphological complexity of neurons. The identification of densely distributed axons is considered an open problem in the field (Li et al., 2019; Lichtman and Denk, 2011; Zeng and Sanes, 2017), one that still has no good solution. On the other hand, reconstruction errors accumulate during neuron reconstruction, and a single error can cause an entire branch to be erroneously connected to another neuron or missed entirely (Helmstaedter, 2013). Therefore, effective large-scale reconstruction of neurons requires extremely high identification accuracy for dense axons. The tension between these two aspects seems hard to reconcile.

Current neuron reconstruction frameworks focus on how to accurately extract the skeletons of neurites and establish connections between skeletons (Meijering, 2010; Peng et al., 2015). The BigNeuron project (Manubens-Gil et al., 2023) conducted a systematic evaluation of 35 automatic neuron reconstruction algorithms, all of which are based on tracing neurite skeletons and can be divided into two categories: local and global approaches. In the local approach (Choromanska et al., 2012; Li et al., 2020; Peng et al., 2011; Yang et al., 2013), localizing the next skeleton point requires computing the signal anisotropy of the image region near the current skeleton point; localization errors typically occur when this region contains signals from other neurites. The global approach (Li et al., 2019; Türetken et al., 2011; Xiao and Peng, 2013) first generates multiple seed points, commonly located on neurite centerlines, and then establishes connections between these seed points to generate the neurite skeleton. This connection relies mainly on spatial location information, so densely distributed neurites are erroneously connected to each other. While deep learning is widely used in neuron reconstruction (Huang et al., 2020; Li and Shen, 2020; Liu et al., 2022; Zhou et al., 2018), mainly for neuronal image segmentation and signal intensity enhancement to reduce reconstruction errors, even ideal segmentation, with all neurite centers identified and their signals enhanced, still yields significant reconstruction errors with skeleton-based methods (Figure 1—figure supplement 1).

To address the problem of error accumulation during neuron reconstruction, it is common practice to utilize statistical information about neuron morphology, such as the angle between two neurites, to identify and remove spurious connections between reconstructed neurites. This strategy (Li et al., 2019; Quan et al., 2016) achieves 80% reconstruction accuracy on GB-scale images under two critical constraints: (1) precise identification of neurite terminals and branch points is required for accurate angle computation and morphological analysis, and (2) somatic locations are required as critical information to remove some links between reconstructed neurites, ensuring that each cell body maps to the root node of a single tree structure. However, for long-range axonal reconstruction across hundreds of gigabytes of images, this strategy is not effective at eliminating error accumulation, owing to factors such as the large distance between axons and the soma and the slight morphological differences between axon junctions and terminations. Consequently, current long-range projection reconstruction methods are semi-automatic and require substantial human intervention (Gao et al., 2023; Wang et al., 2019; Winnubst et al., 2019; Zhou et al., 2021).

Here, we propose a new neuron reconstruction method, called PointTree, which assigns foreground points in neuronal images to their respective neurons. In the workflow, we design a constrained Gaussian clustering method to partition the foreground region of a neuronal image into a series of columnar regions whose centerlines each belong to only a single neurite. This operation essentially eliminates the interference between different neurites in dense reconstruction. In addition, each columnar region is characterized by a minimal enveloping ellipsoid used to construct connections between columnar regions, which form the neurite shapes. Based on the reconstructed shapes, we design a minimal information flow tree model to suppress cumulative reconstruction errors. Using the proposed method, we successfully achieve accurate reconstruction of long-range neuronal projections across hundreds of gigabytes of volumetric image data.

Results

The architecture and principles of PointTree

In the design of PointTree, we formulate a series of optimization problems to assign the foreground points in data blocks to their respective neurites. First, the segmentation network is applied to each data block to obtain foreground points. Next, we apply a constrained Gaussian clustering method (Reynolds, 2009) to partition the foreground points into columnar regions and determine their geometrical parameters by solving the minimum-volume covering ellipsoid problem (Sun and Freund, 2004). Using these geometrical parameters, we construct a 0–1 assignment problem (Volgenant, 1996) to establish links between the columnar regions. Finally, skeletons are extracted from the linked columnar regions via region growing (Harris, 2011) to reduce data redundancy. The key procedures for neuron reconstruction are presented in Figure 1A.
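As a toy illustration of the 0–1 assignment step, the sketch below links columnar regions through their endpoint distances. The endpoint representation, the `max_gap` threshold, and the brute-force search are illustrative assumptions standing in for the geometrical parameters and the assignment solver actually used in PointTree.

```python
import itertools
import math

def link_regions(tails, heads, max_gap=5.0):
    """Toy 0-1 assignment linking columnar regions (illustrative sketch).

    tails, heads: lists of 3D endpoint coordinates of columnar regions,
    with len(heads) >= len(tails). Each tail is matched to at most one
    head so that the total cost is minimal; leaving a tail unlinked incurs
    a penalty of `max_gap`, so links are made only when they are closer
    than that threshold. A brute-force search over permutations stands in
    for the large-scale assignment solver.
    """
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

    best_cost, best_links = float("inf"), []
    for perm in itertools.permutations(range(len(heads)), len(tails)):
        links, cost = [], 0.0
        for i, j in enumerate(perm):
            d = dist(tails[i], heads[j])
            if d <= max_gap:
                links.append((i, j))   # 0-1 decision: make this link
                cost += d
            else:
                cost += max_gap        # penalty for leaving tail i unlinked
        if cost < best_cost:
            best_cost, best_links = cost, links
    return best_links
```

With two regions whose nearby endpoints are 1 voxel apart, the assignment pairs each tail with its nearest head rather than crossing over to the distant one.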

Figure 1 with 4 supplements see all
Summary and principle of PointTree.

(A) The reconstruction procedure of PointTree involves the generation, clustering, and connection of foreground points (the first row). Within this procedure, three optimization problems are designed to allocate the foreground points into their respective neurites (the second row). (B) Schematic diagram of information flow score calculation. In a neurite branch with a fixed root node (green circle), the information flow score is calculated based on the assumption that a neurite has few directional changes. The assumption determines the neurite directly connecting to the root node (red), resulting in two branch angles used to calculate the information flow score. (C) Statistical analysis of the consistency between the minimum information flow and the real situation. For 208 neurite branches, the information flow scores are calculated as ground truth according to their manually determined skeletons and root nodes. These scores are then displayed in ascending order. The root nodes of neurite branches are changed to generate both maximum and minimum information flow scores. (D) One neurite branch is decomposed into two by minimizing the total information flow scores. (E) Performance of different methods on separating closely paralleled neurites. In PointTree, a single neurite is represented by a series of ellipsoids whose centerlines are not simultaneously located within different neurites. They are connected using an ellipsoid shape, which results in perfect reconstruction (Left). However, skeleton-based methods fail to separate two closely paralleled neurites due to interference from other signals (Red circle in middle) or connections being interfered with by another neighboring skeleton point (Red circle in right).

In addition, PointTree employs statistical prior information to reduce reconstruction errors. At a branch point (node), a neurite can be divided into three skeleton segments: the segment entering the node forms two angles with the two segments exiting it. The node angle is defined as the smaller angle between the entering segment and each exiting segment (Figure 1B). Using node angles, we can identify a single complete neurite and its corresponding node angles. A neurite skeleton is generally smooth, with very few sudden directional changes, and even fewer at the nodes, so the node angles should be as small as possible. For a neuronal branch, the node angles are uniquely determined once the root node is given, and the sum of the negative cosines of these node angles, expressed as the information flow value, is small when the root node is correctly identified. This rule defines the minimal information flow tree (MIFT).
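The information flow value described above can be sketched in a few lines: orient the branch away from a candidate root and, at every branch node, accumulate the negative cosine of the angle between the entering segment and each exiting segment. The tree representation and traversal below are our own illustrative assumptions rather than the paper's implementation.

```python
import numpy as np
from collections import defaultdict

def information_flow_score(points, edges, root):
    """Toy information-flow score for one neurite branch (illustrative).

    points: dict node_id -> 3D coordinate (np.ndarray)
    edges:  list of undirected (u, v) pairs forming a tree
    root:   node id treated as the flow source

    At every branch node the score accumulates the negative cosine of the
    angle between the segment entering the node and each segment leaving
    it, so smooth (straight) continuations yield lower scores.
    """
    adj = defaultdict(list)
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)

    score = 0.0
    # Orient the tree away from the root with a depth-first traversal.
    stack = [(root, None)]
    while stack:
        node, parent = stack.pop()
        children = [n for n in adj[node] if n != parent]
        for c in children:
            stack.append((c, node))
        if parent is not None and len(children) >= 2:  # branch node
            d_in = points[node] - points[parent]
            d_in = d_in / np.linalg.norm(d_in)
            for c in children:
                d_out = points[c] - points[node]
                d_out = d_out / np.linalg.norm(d_out)
                score += -float(np.dot(d_in, d_out))
    return score
```

Choosing the root that minimizes this score over all candidates implements the MIFT rule for a single branch.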

In image blocks of densely distributed neurites, we used semi-automatic software (Zhou et al., 2021) to extract 208 neuronal branches and identify their root nodes. For each branch, we calculated the information flow value from its manually determined skeleton and root node as the ground truth (Figure 1C). To validate MIFT, we looped through all possible structures of these branches by changing the root node and computed the maximum and minimum information flow values (Figure 1C). For most neuronal branches (195/208), the ground-truth information flow value achieves the minimum, suggesting that the MIFT rule is reasonable. We utilized MIFT to modify skeleton structures and remove spurious connections between reconstructed neurites (Figure 1D and Figure 1—figure supplement 2), both for reconstructions within individual blocks and for fused reconstructions across adjacent blocks.

PointTree has the capability to separate densely distributed neurites. When dealing with two parallel neurites in close proximity, their shapes can be represented by a series of columnar regions (left panels of Figure 1E). We have modified the Gaussian clustering algorithm by constraining the estimated mean and covariance parameters so that each cluster approaches a columnar shape. Additionally, foreground points within the same cluster are connected to each other. These two features ensure that the centerline of each columnar region belongs to only a single neurite, which is crucial for separating densely packed neurites. Furthermore, we utilize the minimum-volume covering ellipsoid to extract shape information from the columnar regions for constructing their connections. These designs enable PointTree to successfully reconstruct packed neurites. In contrast, skeleton-based local methods determine the position of the next skeleton point from the shape anisotropy of the surrounding region, which often leads to localization errors when two neurite signals fall within that region (middle panels of Figure 1E). For skeleton-based global methods, although seed points can be located at individual neurite centers, accurately constructing connections between these seed points proves challenging because the connections rely on inter-point distances and are susceptible to interference from densely distributed neurites (right panels of Figure 1E).
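The covariance constraint can be illustrated with a small sketch: after a clustering update, the cross-section of each Gaussian is clamped so the cluster stays thin and elongated. The eigenvalue clamp and the `radius` parameter below are our own assumptions for illustration; the paper's actual constraint is formulated inside the clustering optimization itself.

```python
import numpy as np

def columnarize(cov, radius=1.5):
    """Project a 3x3 covariance onto a columnar (cigar-like) shape.

    Illustrative constraint in the spirit of PointTree's modified Gaussian
    clustering: the largest eigenvalue (along the neurite) is kept, while
    the two cross-section eigenvalues are clamped to radius**2 so the
    cluster stays a thin column; `radius` is an assumed tuning parameter,
    capped by the major axis so the result remains a valid covariance.
    """
    vals, vecs = np.linalg.eigh(cov)       # eigenvalues in ascending order
    vals[:2] = min(radius ** 2, vals[2])   # clamp the two minor axes
    return vecs @ np.diag(vals) @ vecs.T
```

Applying this after each M-step would keep every cluster's cross-section near the expected neurite radius while leaving its orientation and length free.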

The merits of PointTree in dense reconstruction

In dense reconstruction, a main concern is how to separate densely distributed neurites, which appear as crossing or closely parallel neurites. These neurites can be identified manually by visualization from different view angles (Figure 2—figure supplement 1). We compared PointTree with several skeleton-based methods, namely neuTube (Feng et al., 2015), PHDF (Radojevic and Meijering, 2017), NGPST (Quan et al., 2016), and MOST (Wu et al., 2014), on this task. We manually labeled the locations where neurites cross or run closely parallel in five 256×256×256 image blocks. For a fair comparison, all methods were run on segmented images derived from the segmentation network. Figure 2A illustrates how PointTree separates crossing and closely parallel neurites. PointTree successfully separates between 71.4% and 91.7% of the densely distributed neurites, whereas the skeleton-based methods separate at most 25.0% (Figure 2B). We also compare PointTree and the other methods on reconstruction examples involving multiple crossing neurites (Figure 2C) and closely parallel neurites. PointTree provides near-perfect reconstructions, while the other methods fail to reconstruct these neurites.

Figure 2 with 1 supplement see all
Performance of PointTree on crossover and closely paralleled neurites.

(A) The reconstruction process of crossover and closely paralleled neurites. (B) Quantitative evaluation of PointTree and several skeleton-based methods on identifying closely distributed neurites. The box plots present the statistical information (n=5) in which the horizontal line in the box, the lower and upper borders of the box represent the median value, the first quartile (Q1), and the third quartile (Q3), respectively. The vertical black lines indicate 1.5 × IQR. (C) Three reconstruction examples derived from PointTree and several skeleton-based methods.

Furthermore, we present quantitative results from PointTree and five widely used skeleton-based reconstruction methods: APP2, neuTube, NGPST, PHDF, and MOST. Eight 256×256×256 image blocks containing many densely distributed neurites form the testing dataset. All reconstruction algorithms were run on the segmentation images of this testing dataset. Visual comparisons are shown in Figure 3A: PointTree provides reconstructions close to the ground truth, whereas the skeleton-based methods generate many reconstruction errors and incorrectly merge multiple neurites into a single branch. The quantitative results show that PointTree is far superior to the skeleton-based methods (Figure 3B). For PointTree, the average precision is above 90%, and both recall and f1-score are above 85%. The skeleton-based methods cannot adequately separate the densely packed neurites; the f1-scores of their reconstructions range from 30% to 40%, indicating ineffective reconstruction.

Figure 3
Comparison of reconstruction methods on image blocks containing densely distributed neurites.

(A) Comparison of reconstruction performance among six methods, including PointTree, NGPST, neuTube, APP2, PHDF, and MOST. Individual neurite branches are delineated in different colors. (B) Quantitative evaluation of reconstruction performance using precision, recall, and f1-score. The box plots display these three evaluation indexes (n=8). In the box, the horizontal line represents the median value. The box shows the interquartile range (IQR) from the first quartile (Q1) to the third quartile (Q3). The vertical lines indicate 1.5×IQR.

Reconstruction of data with different signal-to-noise ratios

In the field of neuronal reconstruction, data acquired by different imaging systems often exhibit varying signal-to-noise ratio (SNR) characteristics. For some low-SNR datasets, severe noise interference makes it difficult even for human observers to accurately identify neurite structures. To systematically evaluate PointTree’s reconstruction performance across datasets with different SNRs, we selected and analyzed data from three imaging systems: light sheet microscopy (Stelzer et al., 2021) (LSM), fluorescent micro-optical sectioning tomography (Wang et al., 2021) (fMOST), and high-definition fluorescent micro-optical sectioning tomography (Zhong et al., 2021) (HD-fMOST), with SNR ranges of 2–7, 6–12, and 9–14, respectively (Figure 4A).

Figure 4
Reconstruction performance of PointTree across data with different signal-to-noise ratios.

(A) Data blocks from light sheet microscopy (LSM), fluorescent micro-optical sectioning tomography (fMOST), and high-definition fluorescent micro-optical sectioning tomography (HD-fMOST) are selected. SNR and the corresponding reconstruction scores from PointTree are drawn as line charts. Each dataset has a sample size of n=25 with a data block size of 128×128×128. (B) Reconstruction performance of PointTree on the different datasets. (C) The zoomed-in view displays the region marked by the white box in the first column of (B), with 25 foreground points and 25 background points sampled respectively. The signal intensities of both the foreground and background points are plotted in the adjacent line charts.

Experimental results demonstrate that, thanks to the powerful feature extraction capability of the deep learning network, the trained neural network achieves satisfactory segmentation performance (third row in Figure 4B) even on low-SNR data (first two columns in Figure 4B, top row), laying a solid foundation for subsequent accurate reconstruction (bottom row in Figure 4B). Quantitative analysis reveals that PointTree delivers stable reconstruction performance across all SNR levels. Specifically: for LSM data (sample size n=25, mean SNR = 5.01), average precision = 96.0%, recall = 88.7%, and f1-score=91.0%; for fMOST data (sample size n=25, mean SNR = 8.68), average precision = 95.8%, recall = 87.3%, and f1-score=90.0%; for HD-fMOST data (sample size n=25, mean SNR = 11.4), average precision = 98.1%, recall = 91.0%, and f1-score=93.3% (Figure 4A).

Notably, in low-SNR LSM data, background regions contain more artifactual signals (first panel in Figure 4C) due to similar intensity distributions between background and foreground points. In contrast, high-SNR datasets (fMOST and HD-fMOST) exhibit cleaner background features with distinct intensity separation between background noise and neurite signals (second and third panel in Figure 4C). This observation highlights the critical impact of SNR on reconstruction quality while simultaneously validating the robustness of PointTree, which is aided by the segmentation network, across diverse SNR conditions.
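As a rough illustration of how such sampled foreground and background intensities translate into an SNR figure, the sketch below uses one common convention (foreground-background mean difference divided by the background standard deviation). The paper does not spell out its exact SNR definition, so this formula is an assumption.

```python
import numpy as np

def estimate_snr(foreground, background):
    """Rough SNR estimate from sampled voxel intensities (illustrative).

    Uses the convention (mean foreground - mean background) / std(background);
    other definitions exist, and the one used in the paper is not stated,
    so this is an assumed stand-in.
    """
    fg = np.asarray(foreground, dtype=float)
    bg = np.asarray(background, dtype=float)
    return (fg.mean() - bg.mean()) / bg.std()
```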

Restraining error accumulation in reconstruction

In order to achieve accurate axon reconstruction, it is essential to effectively suppress the snowballing accumulation of reconstruction errors. Here, we evaluate the performance of the minimal information flow tree (MIFT) in restraining reconstruction errors. Figure 5A presents six 512×512×512 image blocks and their reconstructions using PointTree in the first column. The reconstruction fusion procedure is then performed on these axonal reconstructions (Figure 5A). By employing MIFT to revise the reconstructions and remove false connections between axons, reasonable reconstructions are achieved. In contrast, when the same fusion procedure is conducted without MIFT, almost all axons are incorrectly connected together (bottom-right panel in Figure 5A).

Figure 5
Minimal information flow tree effectively restrains the accumulation of reconstruction errors.

(A) Reconstruction comparisons in the fusion process with MIFT and without MIFT are shown. Both image blocks and neurite reconstructions are displayed using maximum projection along the z-direction. Two fusion procedures are performed, and the final fusion reconstructions are presented in the third column. (B) The variation in reconstruction accuracy during the fusion process with MIFT and without MIFT is illustrated. Blue points represent the initial reconstruction accuracy from six image blocks, while green points and red points denote the merged reconstruction accuracy with MIFT and without MIFT, respectively. The squares represent the mean values of the evaluation indexes. (C) The skeletons of three neurite branches from the final merged reconstructions with MIFT are shown. Additionally, corresponding ground-truth reconstructions and reconstruction evaluations are also presented.

We further measure the enhancement in reconstruction accuracy achieved by MIFT (Figure 5B). For the initial reconstructions from the six image blocks, the average f1-score is about 0.86. With MIFT, the average f1-score remains above 0.8 for the reconstructions from the two image blocks generated in the first fusion, and after the second fusion (top-right panel in Figure 5A) the f1-score still reaches 0.79. In contrast, without MIFT, the first fusion causes the f1-score to drop to about 0.3, and after the second fusion the f1-score is less than 0.2. We also present some reconstruction examples after two fusions in Figure 5C, which are close to the ground truth. These results suggest that the MIFT model accounts for the proper structure of axons and can thus restrain error propagation during the reconstruction fusion process.

Long-range axonal projections reconstruction

We applied PointTree to long-range axon reconstruction. The testing image block has a size of 11226×8791×1486 voxels and includes axons from eight neurons (Figure 6A). We also used GTree to manually reconstruct these neurons as the ground-truth reconstruction (Figure 6B). Apart from labeling the training data for the segmentation network and the axon starting points of single neurons, the whole reconstruction process is fully automatic. The results show that PointTree successfully recovered the axonal morphology of all eight neurons without manual intervention (Figure 6C and Videos 1 and 2), and we compared these reconstructions with the ground truth (Figure 6—figure supplement 1). The average precision is above 85%, and the average recall and f1-score are above 80% (Figure 6E). In addition, we present the axon reconstructions from two image blocks (Figure 6C1 and C2) that include a large number of densely distributed axons. This performance suggests that point assignment and the minimal information flow tree model, the two key strategies in PointTree, perform well in long-range axonal reconstruction.

Figure 6 with 2 supplements see all
Long-range axonal reconstruction using PointTree.

(A) The image block contains eight neurons in the ventral posteromedial thalamic region. The projections of these neurons include a large number of densely distributed axons, which are enlarged in A1 and A2. (B) The reconstruction of the eight neurons is achieved by annotators with the semi-automatic software GTree, serving as the ground-truth reconstruction to evaluate automatic algorithms. The reconstructions B1 and B2 correspond to the image blocks A1 and A2. (C) Automatic reconstruction with PointTree results in reconstructions of the densely distributed axons, which are enlarged in C1 and C2. (D) A comparison between automatic reconstruction and ground-truth reconstruction of the axonal projection of one neuron is shown. Green indicates consistent reconstruction, blue indicates missed branches, and red denotes branches from other neurons. (E) Quantitative analysis of the long-range projections of these neurons is presented. Statistical information is displayed in box plots (n=8): the horizontal line in the box represents the median value, the lower and upper borders of the box represent the first quartile (Q1) and the third quartile (Q3), respectively, and the vertical black lines indicate 1.5 × IQR; black points represent the accuracy of the reconstructions for these neurons.

Video 1
Reconstructed long-range axonal projections and raw image data shown in Figure 6; individual axonal projections are delineated in different colors.
Video 2
Tracing one of the reconstructed projections shown in Figure 6.

We also applied PointTree to another image block of 10739×11226×3921 voxels collected with the HD-fMOST system (Zhong et al., 2021). The high signal-to-noise ratio of this optical system results in a significantly extended dynamic range of the signal. PointTree handles this case effectively, and all 14 long-range projections are successfully reconstructed (Figure 6—figure supplement 2). The quantitative results show that the average f1-score is above 90% (Table 1).

Table 1
Quantitative metrics comparing the ground-truth and reconstructed neurons presented in Figure 6—figure supplement 2.
ID    Precision    Recall    F1-score
1     1.00         0.92      0.95
2     1.00         1.00      1.00
3     0.98         0.76      0.86
4     1.00         0.82      0.90
5     1.00         0.77      0.87
6     1.00         0.92      0.96
7     0.96         0.75      0.84
8     1.00         0.87      0.93
9     1.00         0.82      0.90
10    1.00         0.96      0.98
11    1.00         0.99      0.99
12    1.00         0.77      0.87
13    1.00         0.90      0.95
14    0.99         0.87      0.93

Despite the need to solve multiple large-scale optimization problems, the reconstruction speed using PointTree is generally faster than the imaging speed. For instance, in a typical scenario involving 254 image blocks with 512×512 × 512 voxels, the total time required for reconstruction is approximately 44 min. Even for a larger dataset comprising 821 image blocks with 512×512 × 512 voxels and including a significant number of sparsely distributed neurites, the total time cost amounts to about 60 min (Table 2). It should be noted that the time cost does not increase linearly as data volume increases due to the influence of neurite density on overall reconstruction time. In summary, PointTree demonstrates remarkable speed in reconstructing long-range axons (Video 3).

Table 2
Time cost of three modules in the entire reconstruction for two testing datasets shown in Figure 6, Figure 6—figure supplement 2.
Block number (size: 512×512×512)    Points clustering (min)    Clusters connection (min)    Reconstruction merging (min)
254                                 23                         18                           3
821                                 22                         35                           3
Video 3
Example run of PointTree on Windows.

Discussion

We have presented an automated method for reconstructing the long-range projections of neurons. In this study, we address the problems of mutual interference among densely distributed neurites and of cumulative error during reconstruction by designing a reconstruction method based on point set assignment and the minimal information flow tree, respectively. As a result, our approach enables accurate reconstruction of long-range neuron projections from hundreds of gigabytes of data. This advance significantly enhances the efficiency of whole-brain-scale neuron reconstruction, bridging the substantial gap between the factory-level generation of whole-brain-scale neuronal imaging data and the tens of hours required to reconstruct a single neuron.

Our approach operates on image foregrounds in which the segmented neurites have a roughly fixed radius of approximately three voxels. In this case, we can estimate the total number of foreground points (voxels) and set a suitable number of columnar regions to ensure the anisotropy of each region, based on the fact that the union of the columnar regions equals the foreground region. The anisotropy of the columnar regions reduces the difficulty of establishing their connections. The requirement that all segmented neurites have a relatively fixed radius can be fulfilled: for all neurites, voxel values decrease as the voxels deviate from the nearest centerline, and the deep learning network is able to grasp this feature and segment only the neurite centerline and its neighborhood. Typically, for reconstructions of neurons whose projections are distributed over hundreds to thousands of gigabytes of data, less than a gigabyte of labeled images is needed as training data. The labeling process takes a few hours, which is negligible compared with semi-automatic reconstruction of all neurons in whole-volume images.
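The cluster-count estimate described above can be sketched as follows: divide the foreground volume by the expected volume of one columnar region. The target column length and the cylindrical volume model are illustrative assumptions, not values from the paper.

```python
import math

def num_columnar_regions(n_foreground_voxels, radius_vox=3, column_len_vox=20):
    """Heuristic for choosing the cluster count (assumed parameters).

    With segmented neurites of roughly fixed radius, the total foreground
    volume divided by the volume of one idealized cylindrical column gives
    a cluster count that keeps each region thin and elongated.
    `column_len_vox` is an illustrative target length, not a value from
    the paper.
    """
    column_volume = math.pi * radius_vox ** 2 * column_len_vox
    return max(1, round(n_foreground_voxels / column_volume))
```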

We propose a new reconstruction mode centered on point set assignment, in contrast to the current mode focused on skeleton extraction. In the current paradigm, most deep networks are used to enhance the signal-to-noise ratio of neuronal images and do not address signal interference during skeleton extraction. In contrast, our approach directly processes the foreground points generated by the deep learning network. With continued advances in deep learning, the generality and accuracy of image segmentation will keep improving, significantly broadening the application scope of our method. Essentially, our method can be applied to any skeleton-tracing-based application scenario and effectively eliminates dense signal interference.

Our method still generates a few reconstruction errors, for three reasons. First, our method directly handles image foregrounds, which leads to reconstruction errors when neurites with weak image intensities are not identified. Second, relying solely on foreground point information and rule-based judgments may generate connection errors when establishing connections between neurites. Finally, the minimal information flow tree's fundamental assumption, that axons should be as smooth as possible, does not always hold: real axons can take quite sharp turns (Figure 1—figure supplement 3), leading the algorithm to erroneously split a single continuous axon into disjoint fibers (Figure 1—figure supplement 3). Therefore, for automatic reconstruction of neurons at the brain-wide scale, further work is needed to enhance imaging intensity and to incorporate soma shapes and raw image signals into neurite connection recognition.

Materials and methods

Data collections


All animal experiments followed procedures approved by the Institutional Animal Ethics Committee of the Huazhong University of Science and Technology. The test datasets were collected from two kinds of samples. For one C57BL/6 male mouse, 100 nl of AAV-Cre virus and 100 nl of AAV-EF1α-DIO-EYFP virus were simultaneously injected into the VPM nucleus. Twenty-one days later, the chemical sectioning fluorescence tomography (CSFT) system (Wang et al., 2021) was used to acquire imaging data (Figures 1–6); more details can be found in Zhang et al., 2021. For one C57BL/6 J male mouse, 100 nl of AAV-YFP was injected into the motor area. Twenty-one days later, high-definition fluorescent micro-optical sectioning tomography (HD-fMOST) was used to acquire imaging data (Zhong et al., 2021; Figure 6—figure supplement 2).

Generation of foreground points


Our reconstruction method operates on image foregrounds. Here, we used UNet3D (Çiçek et al., 2016) for image stack segmentation without modifying the network structure; detailed information about UNet3D can be found in Çiçek et al., 2016. To meet the requirement that the network output, the segmented neurites, has a relatively fixed radius, we use the distance field of the neurite skeleton as the ground truth supervising the network. Initially, the semi-automatic software GTree was used to extract the neurite skeleton, and the skeleton points were then interpolated so that the distance between any skeleton point and its nearest neighboring point was less than 1 μm. Subsequently, spherical regions with a radius of 5 voxels centered on the interpolated skeleton points were marked as candidate foreground areas. Within these candidate areas, the distance from each point to its nearest interpolated skeleton point was calculated. Finally, these distances were mapped through a Gaussian kernel to form a Gaussian density map, which, normalized by its maximum value, yields the distance-field map used to supervise the UNet3D output.
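The ground-truth construction above can be sketched with NumPy and SciPy. Note that `distance_field_target`, the default radius, and the kernel width `sigma` are our own illustrative names and values, not the authors' exact implementation.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def distance_field_target(shape, skeleton_points, radius=5, sigma=2.0):
    """Build a normalized Gaussian distance-field map from interpolated
    skeleton points (illustrative sketch of the supervision target)."""
    skel = np.zeros(shape, dtype=bool)
    idx = np.round(np.asarray(skeleton_points)).astype(int)
    skel[idx[:, 0], idx[:, 1], idx[:, 2]] = True
    # Distance from every voxel to its nearest skeleton point.
    dist = distance_transform_edt(~skel)
    # Map distances through a Gaussian kernel inside the candidate spheres.
    target = np.exp(-(dist ** 2) / (2 * sigma ** 2))
    target[dist > radius] = 0.0          # keep only the 5-voxel neighborhoods
    return target / target.max()         # normalize by the maximum value
```

Voxels on the skeleton receive the value 1, and voxels outside the spherical candidate regions are set to zero, matching the description of the supervision map.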

In the training stage, the Adam optimizer is used with an initial learning rate of 3e-4, an input image size of 128×128×128, and a batch size of 1; the L1-norm is used as the loss function. We present reconstructions from two kinds of fMOST datasets, one from Zhang et al., 2021 and the other from Zhong et al., 2021. We therefore created two sets of training data, each consisting of 20 image blocks of size 512×512×512 (each divided into 64 image blocks of size 128×128×128). In each set, 10 image blocks contain densely distributed neurites, while the other 10 contain sparsely distributed neurites. In the prediction stage, we apply a threshold to the predicted distance-field image: voxels with values greater than 0.5 are regarded as foreground points.

Neuron Reconstruction based on Points assignment


For the image stack, we allocated the foreground points to their respective neurites and established connections between neurites by constructing three optimization models: (1) the constrained Gaussian mixture model divides the foreground points into a set of points, each of which has a column shape; (2) the minimum-volume covering ellipsoids model extracts the features of the column-shaped point set; (3) the 0–1 assignment optimization model establishes connections between the column-shaped point sets, resulting in the shapes of individual neurites, and then builds connections between the reconstructed neurites.

Constrained Gaussian mixture model


The three-dimensional Gaussian function has an ellipsoidal shape in space, which we utilize to approximate the columnar shape of local neurites. In this study, a Gaussian mixture with K components is employed to approximate the shape of all neurites in an image block; the component number K is obtained from the point density and is discussed later. Given the foreground points $x_1,x_2,\ldots,x_n$, the probability density function $P(x_i)$ of each foreground point $x_i$ is calculated as follows:

(1) $P(x_i)=\sum_{j=1}^{K}\pi_j\,N(x_i\mid\mu_j,\Sigma_j)$

Here, $N(x_i\mid\mu_j,\Sigma_j)$ is the Gaussian density function with mean $\mu_j$ and covariance matrix $\Sigma_j$, and the weight $\pi_j$ is the mixing coefficient of the j-th component. $N(x_i\mid\mu_j,\Sigma_j)$ is given by the formula:

(2) $N(x_i\mid\mu_j,\Sigma_j)=\dfrac{1}{(2\pi)^{3/2}|\Sigma_j|^{1/2}}\,e^{-\frac{1}{2}(x_i-\mu_j)^T\Sigma_j^{-1}(x_i-\mu_j)}$

Based on probability density function, the conditional probability can be computed as:

(3) $p_{i,j}=P(x_i\mid \mathrm{cluster}_j)=\dfrac{\pi_j\,N(x_i\mid\mu_j,\Sigma_j)}{\sum_{l=1}^{K}\pi_l\,N(x_i\mid\mu_l,\Sigma_l)},\quad j=1,2,\ldots,K$

Here, $p_{i,j}$ is the conditional probability of assigning $x_i$ to the j-th cluster. If $p_{i,k}$ is the maximum value among $\{p_{i,1},\ldots,p_{i,K}\}$, the foreground point $x_i$ is assigned to the k-th cluster, and all points assigned to the k-th cluster form a columnar region. Considering that both the number of foreground points and the number of components are large, we add the following constraints to the Gaussian mixture model:

(4) $\sum_{j=1}^{K}\pi_j=1$
(5) $I(\mu_j)\ge\varepsilon_0,\quad |\Sigma_j|\le\varepsilon_1$

$\sum_{j=1}^{K}\pi_j=1$ normalizes the total probability distribution to 1. $I(\cdot)$ represents the signal intensity from the segmented image; $\varepsilon_0$ is the minimum signal intensity of foreground points and is set to 128 in the algorithm. $I(\mu_j)\ge\varepsilon_0$ restrains the center of each Gaussian distribution to be a foreground point. $|\Sigma_j|\le\varepsilon_1$ restrains the determinant of the covariance matrix, which controls a suitable number of foreground points for each columnar region; $\varepsilon_1$ is set to the cube of three times the average neurite diameter.

Maximum likelihood is employed to estimate the parameters of Gaussian mixture model and the final optimization problem is formed as follows:

(6) $(\pi_j^{*},\mu_j^{*},\Sigma_j^{*})_{j=1,2,\ldots,K}=\arg\max\prod_{i=1}^{n}P(x_i)=\arg\max\prod_{i=1}^{n}\Big(\sum_{j=1}^{K}\pi_j\,N(x_i\mid\mu_j,\Sigma_j)\Big)$
(7) $\text{s.t.}\quad \sum_{j=1}^{K}\pi_j=1,\quad I(\mu_j)\ge\varepsilon_0,\quad |\Sigma_j|\le\varepsilon_1$

To solve this optimization problem, we employ the peak density algorithm (Wei et al., 2023) to compute a density for each foreground point and sort the points in descending order of density. We first select a point as a seed point and exclude the foreground points within a radius of 5 voxels around it; we then continue selecting seed points until all foreground points are either selected or excluded. The selected K seed points represent the initial K components. Seed points are selected starting from the median density and moving toward both extremes, which reduces the chance that a seed point lies at the center of a crossover or at the edge of a neurite and makes the generated columnar regions more reasonable. The positions of the K seed points are used as the initial $(\mu_1,\mu_2,\ldots,\mu_K)$, and the covariance matrices are initialized to the identity matrix. The constrained Gaussian mixture model is solved by the EM algorithm (McLachlan and Krishnan, 2007), which alternates two steps:

E-step: For each point xi, compute its probability within each Gaussian distribution using the probability density function:

(8) $p_{i,j}=\dfrac{\pi_j\,N(x_i\mid\mu_j,\Sigma_j)}{\sum_{l=1}^{K}\pi_l\,N(x_i\mid\mu_l,\Sigma_l)}$

M-step: Update the mean value, covariance matrices, and weight vectors.

(9) $\pi_j=\dfrac{\sum_{i=1}^{n}p_{i,j}}{n}$
(10) $\mu_j=\dfrac{\sum_{i=1}^{n}p_{i,j}\,x_i}{\sum_{i=1}^{n}p_{i,j}}$
(11) $\Sigma_j=\dfrac{\sum_{i=1}^{n}p_{i,j}\,(x_i-\mu_j)(x_i-\mu_j)^T}{\sum_{i=1}^{n}p_{i,j}}$

In addition, the constrained Gaussian mixture model has the constraints $I(\mu_j)\ge\varepsilon_0$ and $|\Sigma_j|\le\varepsilon_1$. After the M-step, the components with $I(\mu_j)<\varepsilon_0$ are selected. Eigenvalue decomposition is applied to $\Sigma_j$, yielding eigenvalues $(\gamma_1,\gamma_2,\gamma_3)$ in descending order and eigenvectors $(v_1,v_2,v_3)$. $\mu_j$ is shifted along $v_1$ and $-v_1$ to generate two new clusters with means and covariance matrices $(\mu_{j,1},\Sigma_{j,1})$ and $(\mu_{j,2},\Sigma_{j,2})$ as follows:

(12) $\mu_{j,1}=\mu_j+\gamma_2\,v_1$
(13) $\mu_{j,2}=\mu_j-\gamma_2\,v_1$
(14) $\Sigma_{j,1}=\dfrac{\sum_{i=1}^{n}p_{i,j}\,(x_i-\mu_{j,1})(x_i-\mu_{j,1})^T}{\sum_{i=1}^{n}p_{i,j}}$
(15) $\Sigma_{j,2}=\dfrac{\sum_{i=1}^{n}p_{i,j}\,(x_i-\mu_{j,2})(x_i-\mu_{j,2})^T}{\sum_{i=1}^{n}p_{i,j}}$

For $|\Sigma_j|>\varepsilon_1$, the covariance matrix is updated as follows:

(16) $\Sigma_j=\dfrac{\varepsilon_1}{|\Sigma_j|}\,\Sigma_j$

The E-step and M-step are iterated until the k-th result $\{\mu_k,\Sigma_k\}$ and the (k−1)-th result satisfy the stopping criterion:

(17) $\left\|\dfrac{\mu_k-\mu_{k-1}}{\mu_{k-1}}\right\|<\varepsilon \quad\text{and}\quad \left\|\dfrac{\Sigma_k-\Sigma_{k-1}}{\Sigma_{k-1}}\right\|<\varepsilon$

Here the division is element-wise, $\|\cdot\|$ denotes the L2-norm, and $\varepsilon$ is set to 0.01.
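The E/M updates of Equations 8–11, together with the determinant constraint, can be sketched as follows. This is a minimal illustration: `em_step` and `eps1` are our names, and the intensity constraint and the cluster-splitting step (Equations 12–15) are omitted for brevity.

```python
import numpy as np
from scipy.stats import multivariate_normal

def em_step(X, pi, mu, cov, eps1):
    """One EM iteration of the Gaussian mixture (Eqs. 8-11), with the
    determinant constraint |Sigma_j| <= eps1 enforced by rescaling."""
    K = len(pi)
    # E-step: responsibility of each component for each point (Eq. 8).
    resp = np.stack([pi[j] * multivariate_normal.pdf(X, mu[j], cov[j])
                     for j in range(K)], axis=1)
    resp /= resp.sum(axis=1, keepdims=True)
    # M-step: update weights, means, covariances (Eqs. 9-11).
    Nj = resp.sum(axis=0)
    pi = Nj / len(X)
    mu = (resp.T @ X) / Nj[:, None]
    cov_new = []
    for j in range(K):
        d = X - mu[j]
        S = (resp[:, j, None, None] * np.einsum('ni,nj->nij', d, d)).sum(0) / Nj[j]
        det = np.linalg.det(S)
        if det > eps1:
            # Rescale so det(S) = eps1 (det scales cubically in 3-D).
            S *= (eps1 / det) ** (1.0 / 3.0)
        cov_new.append(S)
    return pi, mu, np.array(cov_new)
```

After a few iterations on well-separated clusters, the means converge to the cluster centers while every covariance determinant stays below `eps1`.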

Shape characterization of columnar regions


After deriving the columnar regions by solving the constrained Gaussian mixture model, their geometric shapes (terminals and centerlines) must be characterized. For this purpose, we calculate the minimum-volume ellipsoid that fully encompasses each individual columnar region. For $c\in\mathbb{R}^3$ and $Q\in S_{++}^{3}$, a three-dimensional ellipsoid is defined as follows (Sun and Freund, 2004):

(18) $E_{c,Q}:=\{x\in\mathbb{R}^3 \mid (x-c)^T Q\,(x-c)\le 1\}$

Here, c is the center of the ellipsoid, Q represents its geometric shape, and $S_{++}^{3}$ denotes the convex cone of 3×3 symmetric positive definite matrices. The volume of $E_{c,Q}$ is given by the formula:

(19) $\mathrm{Volume}(E_{c,Q})=\dfrac{\pi^{3/2}}{\Gamma(3/2+1)}\,\dfrac{1}{\sqrt{\det(Q)}}$

Here, $\Gamma(\cdot)$ is the standard gamma function and $\det(Q)$ is the determinant of the matrix Q. Minimizing the volume of $E_{c,Q}$ is equivalent to minimizing $\det(Q^{-1/2})$. Therefore, for a columnar region with foreground points $P=\{x_1,x_2,\ldots,x_m\}$, we define the target function as follows:

(20) $\mathrm{P1}:\;(c^{*},Q^{*})=\arg\min_{c,Q}\,\det(Q^{-1/2})$
(21) $\text{s.t.}\quad (x_i-c)^T Q\,(x_i-c)\le 1,\quad i=1,2,\ldots,m$
(22) $c\in \mathrm{CHull}(P),\quad Q\in S_{++}^{3}$

Here, $c\in\mathrm{CHull}(P)$ restrains the solved ellipsoid center to lie within the smallest convex hull formed by the clustered points. To solve this problem, the variable substitutions $A=Q^{1/2}$ and $y=Q^{1/2}c$ are applied to Equation 20 and Equation 21, transforming the original problem P1 into the following convex optimization problem:

(23) $\mathrm{P2}:\;(A^{*},y^{*})=\arg\min_{A,y}\,-\ln\det(A)$
(24) $\text{s.t.}\quad (Ax_i-y)^T(Ax_i-y)\le 1,\quad i=1,2,\ldots,m$
(25) $A\in S_{++}^{3}$

By adding a logarithmic barrier function, we obtain the following formulation:

(26) $\mathrm{P3}:\;(A^{*},y^{*},z^{*})=\arg\min_{A,y,z}\,-\ln\det(A)-\theta\sum_{i=1}^{m}\ln(z_i)$
(27) $\text{s.t.}\quad (Ax_i-y)^T(Ax_i-y)+z_i=1,\quad i=1,2,\ldots,m$
(28) $A\in S_{++}^{3},\quad z_i>0$

As $\theta$ varies over the interval $(0,\infty)$, the solution of P3 changes; as $\theta$ approaches 0, the optimal solution of P3 tends to that of P2. Introducing dual multipliers $d_i$ satisfying $d_i z_i=\theta$, the optimality conditions can be written as:

(29) $\sum_{i=1}^{m}d_i\big[(Ax_i-y)x_i^T+x_i(Ax_i-y)^T\big]-A^{-1}=0$
(30) $\sum_{i=1}^{m}d_i\,(y-Ax_i)=0$
(31) $(Ax_i-y)^T(Ax_i-y)+z_i=1,\quad i=1,2,\ldots,m$
(32) $d_i z_i=\theta,\quad i=1,2,\ldots,m$
(33) $d_i,\,z_i\ge 0$

At this point, the error between the solution of this system of equations and the optimal solution of P3 is less than $d^Tz$. From Equation 30, an explicit expression for y is obtained:

(34) $y=\dfrac{AXd}{e^Td}$

Here, X stands for the 3×m matrix $[x_1|x_2|\cdots|x_m]$, e for the m-dimensional vector of ones $(1,1,\ldots,1)^T$, and d for $(d_1,d_2,\ldots,d_m)^T$. Substituting Equation 34 into Equation 29 yields the equation for the matrix A:

(35) $\Big(XDX^T-\dfrac{Xdd^TX^T}{e^Td}\Big)A+A\Big(XDX^T-\dfrac{Xdd^TX^T}{e^Td}\Big)=A^{-1}$

Here, D stands for the m×m diagonal matrix $\mathrm{Diag}(d_1,d_2,\ldots,d_m)$, and the explicit expression for A is

(36) $A=A(d)=\Big[2\Big(XDX^T-\dfrac{Xdd^TX^T}{e^Td}\Big)\Big]^{-1/2}$

and the explicit expression for y is:

(37) $y=\Big[2\Big(XDX^T-\dfrac{Xdd^TX^T}{e^Td}\Big)\Big]^{-1/2}\dfrac{Xd}{e^Td}$

Substituting these two expressions into the system of Equations 29–33 eliminates the variables A and y, giving the following system in d and z only:

(38) $f(d)+z-e=0$
(39) $Dz-\theta e=0$
(40) $d_i,\,z_i\ge 0$

Here, $f(d)$ is a nonlinear function of the variable d:

(41) $f_i(d)=\Big(x_i-\dfrac{Xd}{e^Td}\Big)^T\Big[2\Big(XDX^T-\dfrac{Xdd^TX^T}{e^Td}\Big)\Big]^{-1}\Big(x_i-\dfrac{Xd}{e^Td}\Big),\quad i=1,2,\ldots,m$

For a fixed barrier parameter $\theta$, we employ Newton's method to solve the system of equations. Writing $df(d)$ for the Jacobian matrix of $f(d)$, the Jacobian of the system can be computed as:

(42) $\begin{bmatrix} df(d) & I\\ Z & D\end{bmatrix}$

And the Newton’s direction is written as:

(43) $\Delta(d)=\big(df(d)-D^{-1}Z\big)^{-1}\big(h_1-D^{-1}h_2\big)$
(44) $\Delta(z)=D^{-1}h_2-D^{-1}Z\big(df(d)-D^{-1}Z\big)^{-1}\big(h_1-D^{-1}h_2\big)$
(45) $h_1=e-z-f(d),\quad h_2=\theta e-Dz$

Starting from an initial $(d_0,z_0)$, iterate $(d_n,z_n)=(d_{n-1},z_{n-1})+\tilde{\beta}\,(\Delta(d_{n-1}),\Delta(z_{n-1}))$ to obtain the final optimal solution, where $\tilde{\beta}$ is the Newton step. The detailed procedure is given in the pseudo-code below:

Algorithm 1. Compute Newton's direction.
Input: $(d,z,\theta)$ satisfying $d,z>0$, $\theta\ge 0$

1. $A^{-2}(d)=2\Big(XDX^T-\dfrac{Xdd^TX^T}{e^Td}\Big)$
2. $\Sigma(d)=\Big(X-\dfrac{Xde^T}{e^Td}\Big)^T A^{2}(d)\Big(X-\dfrac{Xde^T}{e^Td}\Big)$
3. $df(d)=2\Big(\dfrac{\Sigma(d)}{e^Td}+\Sigma(d)\circ\Sigma(d)\Big)$
4. $(\Delta(d),\Delta(z))=\Big(\big(df(d)-D^{-1}Z\big)^{-1}\big(h_1-D^{-1}h_2\big),\; D^{-1}h_2-D^{-1}Z\,\Delta(d)\Big)$

Output: $(\Delta(d),\Delta(z))$
Algorithm 2. Process of solving P2.
Input: $\{x_1,x_2,\ldots,x_m\}$

1. $r=0.99$, $(d_0,z_0)=\Big(\dfrac{3}{2m}e,\; e-f(d_0)\Big)$
2. $E=\det(A(d))$
3. while $\Big(\|e-f(d)-z\|>\varepsilon_1$ or $\dfrac{d^Tz}{E}>\varepsilon_2\Big)$
4.  $\theta=\dfrac{d^Tz}{10m}$
5.  $(\Delta(d),\Delta(z))=$ Compute_Newton_direction$(d,z)$
6.  $\bar{\beta}=\max\{\beta \mid (d,z)+\beta(\Delta(d),\Delta(z))\ge 0\}$
7.  $\tilde{\beta}=\min(r\bar{\beta},\,1)$
8.  $(d,z)=(d,z)+\tilde{\beta}\,(\Delta(d),\Delta(z))$
9.  $E=\det(A(d))$

Output: $Q^{*}=A(d)^{2}$, $c^{*}=A(d)^{-1}y(d)$

With the optimal solution $(Q^{*},c^{*})$, we then check whether $c^{*}$ lies within the convex hull of the input point set $\{x_1,x_2,\ldots,x_m\}$. If it does not, the constrained Gaussian mixture model is applied to partition the set into two subsets, and the minimum-volume covering ellipsoid problem is solved again on each subset. By solving this minimum-volume covering ellipsoid problem, we can characterize the columnar regions more accurately.

Note that from the constrained GMM, each cluster has a corresponding mean and covariance matrix that essentially describe its shape. However, if these two values were used directly in place of $c^{*}$ and $Q^{*}$, the exported ellipsoid might encompass only part of the points in the cluster. To cover all points, all elements of the covariance matrix would need to be proportionally enlarged, but the volume of the corresponding ellipsoid would then not be minimal. Both cases reduce the accuracy of the connections between clusters, that is, between columnar regions. We therefore introduce the minimum-volume covering ellipsoid model to extract the shape of each columnar region.
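For intuition, the same minimum-volume enclosing ellipsoid can be computed with a Khachiyan-style multiplicative-update scheme, a simpler (if slower) alternative to the barrier/Newton method described above; `mvee` and its tolerance are our illustrative choices, not the authors' implementation.

```python
import numpy as np

def mvee(points, tol=1e-4):
    """Minimum-volume enclosing ellipsoid {x | (x-c)^T Q (x-c) <= 1}
    via Khachiyan's multiplicative updates (illustrative sketch)."""
    P = np.asarray(points, dtype=float)        # (m, n) points
    m, n = P.shape
    Qlift = np.vstack([P.T, np.ones(m)])       # lift to homogeneous coords
    u = np.ones(m) / m                         # weights on the points
    err = tol + 1.0
    while err > tol:
        V = Qlift @ np.diag(u) @ Qlift.T
        # g[i] = x_i^T V^{-1} x_i in lifted coordinates
        g = np.einsum('ij,ji->i', Qlift.T @ np.linalg.inv(V), Qlift)
        j = np.argmax(g)
        step = (g[j] - n - 1) / ((n + 1) * (g[j] - 1))
        new_u = (1 - step) * u
        new_u[j] += step
        err = np.linalg.norm(new_u - u)
        u = new_u
    c = P.T @ u                                # ellipsoid center
    Q = np.linalg.inv(P.T @ np.diag(u) @ P - np.outer(c, c)) / n
    return c, Q
```

The returned center and shape matrix play the roles of $c^{*}$ and $Q^{*}$: every input point satisfies the ellipsoid inequality up to the tolerance.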

Skeleton generation using 0-1 assignment model


The 0–1 assignment model (Volgenant, 1996) can robustly and accurately establish connections between particles in live-cell imaging (Jaqaman et al., 2008) and is particularly effective when particles are densely distributed, merged, or split. We treat columnar regions as particles and apply the 0–1 assignment model to build connections between them. For the i-th columnar region, the center and the two endpoints of the longest axis of its minimum-volume covering ellipsoid are denoted $c_i$, $t_{i,0}$, and $t_{i,1}$; the direction refers to the vector from the center toward $t_{i,k}$, with k equal to 0 or 1. Based on the directions and endpoints, we design the cost matrix of the 0–1 assignment model:

(46) $C=\begin{bmatrix}\begin{matrix}c(t_{1,0},t_{1,0}) & c(t_{1,0},t_{1,1}) & \cdots & c(t_{1,0},t_{n,1})\\ c(t_{1,1},t_{1,0}) & c(t_{1,1},t_{1,1}) & \cdots & c(t_{1,1},t_{n,1})\\ \vdots & \vdots & \ddots & \vdots\\ c(t_{n,1},t_{1,0}) & c(t_{n,1},t_{1,1}) & \cdots & c(t_{n,1},t_{n,1})\end{matrix} & D\\ D & D\end{bmatrix}_{4n\times 4n}$
(47) $c(t_{i,i_0},t_{j,j_0})=\begin{cases}100, & i=j\\ \mathrm{norm}(t_{i,i_0},t_{j,j_0})\cdot\big(0.5\times\big(\theta(t_{i,i_0},t_{j,j_0})^{3}+1.001\big)\big)^{4}, & i\ne j\end{cases}$
(48) $\theta(t_{i,i_0},t_{j,j_0})=\big\langle \mathrm{dir}(c_i,t_{i,i_0}),\mathrm{dir}(c_i,t_{j,j_0})\big\rangle+\big\langle \mathrm{dir}(c_j,t_{j,j_0}),\mathrm{dir}(c_j,t_{i,i_0})\big\rangle-\big\langle \mathrm{dir}(c_i,t_{i,i_0}),\mathrm{dir}(c_j,t_{j,j_0})\big\rangle$

Here, D is a 2n×2n auxiliary matrix whose elements are all set to 100. Both $i_0$ and $j_0$ in Equation 47 are equal to 0 or 1, labeling the two endpoints of the longest axis of the ellipsoid. $\mathrm{norm}(t_{i,i_0},t_{j,j_0})$ represents the Euclidean distance between $t_{i,i_0}$ and $t_{j,j_0}$, and $\theta(t_{i,i_0},t_{j,j_0})$ describes the angle between the two ellipsoids, that is, the two columnar regions. $\mathrm{dir}(c_i,t_{i,i_0})$ represents the line from $c_i$ to $t_{i,i_0}$, and $\langle \mathrm{dir}(c_i,t_{i,i_0}),\mathrm{dir}(c_i,t_{j,j_0})\rangle$ is the cosine of the angle between the two lines. The threshold of 100 in Equations 46 and 47 is an experimental value that ensures the terminal points of neurites do not connect to more than one other terminal point.

After setting the cost matrix, the 0–1 assignment problem is defined as follows:

(49) $A^{*}=\arg\min_{A}\sum_{i=1}^{4n}\sum_{j=1}^{4n}A_{ij}C_{ij}$
(50) $\text{s.t.}\quad \sum_{i=1}^{4n}A_{i,j}=1\quad (j=1,2,\ldots,4n)$
(51) $\sum_{j=1}^{4n}A_{i,j}=1\quad (i=1,2,\ldots,4n)$

Here, $A^{*}$ represents the connectivity matrix between the terminals of the columnar regions: if $A_{i,j}=1$, a connection is established between terminal i and terminal j; if $A_{i,j}=0$, no connection is established. The constraints $\sum_{i=1}^{4n}A_{i,j}=1$ $(j=1,\ldots,4n)$ and $\sum_{j=1}^{4n}A_{i,j}=1$ $(i=1,\ldots,4n)$ restrict each terminal to connect with at most one other terminal. The Lapjv algorithm (Volgenant, 1996) is used to solve this optimization problem, yielding the shapes of individual neurites in the image blocks. Furthermore, we employ a region-growing method to generate skeletons from the reconstructed shapes, achieving neurite reconstruction in individual image blocks.
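Equations 49–51 define a standard linear assignment problem. A minimal sketch using SciPy's Hungarian-algorithm solver (the paper itself uses Lapjv, which returns the same optimal assignment); `connect_terminals` is our illustrative name:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def connect_terminals(cost):
    """Solve the 0-1 assignment problem of Eqs. 49-51 on a square cost
    matrix, returning the binary connectivity matrix A."""
    rows, cols = linear_sum_assignment(cost)   # minimizes sum A_ij * C_ij
    A = np.zeros_like(cost, dtype=int)
    A[rows, cols] = 1                          # A[i, j] = 1: i connects to j
    return A
```

The row and column constraints of Equations 50–51 are satisfied by construction, since the solver produces a permutation of the terminals.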

Minimal information flow tree for revising the reconstruction

The minimal information flow tree model is designed to modify the topology of skeletons, eliminate incorrect connections, and decompose them into multiple branches. Given an input skeleton file such as an SWC file (Cannon et al., 1998), we convert it into a binary tree structure with the following steps.

Step 1


Select the neurite skeleton $S_1$, the longest of the mutually connected neurite skeletons. One of its terminal nodes is recorded as the head node $n_1$.

Step 2


Generate the initial tree structure. Starting at the head node $n_1$, search for the linking nodes along skeleton $S_1$, denoted $n^{s_1}_1,n^{s_1}_2,\ldots,n^{s_1}_{k_1}$. The topology structure is $n^{s_1}_i{\to}\mathrm{leftnode}=n^{s_1}_{i+1}$.

Step 3


Generate the new structure induced by the linking node $n^{s_1}_1$. $n^{s_1}_1$ is regarded as the head node, and its corresponding neurite skeleton is denoted $S_2$. Let $n^{s_2}_1,n^{s_2}_2,\ldots,n^{s_2}_{k_2}$ represent the linking nodes in skeleton $S_2$. The corresponding topology structure is $n^{s_1}_1{\to}\mathrm{rightnode}=n^{s_2}_1$, $n^{s_2}_i{\to}\mathrm{leftnode}=n^{s_2}_{i+1}$.

Step 4


Repeat Step 3 to process the linking nodes $n^{s_1}_2,\ldots,n^{s_1}_{k_1}$, adding the corresponding topology structures to the total tree structure. After obtaining the tree structures induced by the linking nodes in $S_1$, use the operation in Step 3 to generate the tree structures induced by the linking nodes in $S_2$. Continue in this manner until all linking nodes have been processed.

To gain a better understanding of the above process, we have provided a demonstration of how to generate the corresponding binary tree from the skeletons of neurites (Figure 1—figure supplement 4).

For the skeletons of neurites in an image block, the corresponding number of binary tree structures is generated, and we use the MIFT model to merge or split these binary structures. Suppose that an image stack contains m skeletons with K nodes in total, denoted $n_1,\ldots,n_{K-1},n_K$. The connections among these nodes are stored in a matrix W with K×K elements: $W_{i,j}=0$ indicates no connection between node i and node j; $W_{i,j}=1$ indicates $j{\to}\mathrm{headnode}=i$; $W_{i,j}=2$ indicates $j{\to}\mathrm{leftnode}=i$; and $W_{i,j}=3$ indicates $j{\to}\mathrm{rightnode}=i$.

The information flow can be computed as follows:

(52) $W^{*}=\arg\min_{W}\sum_{i=1}^{K}f(W,n_i)$
(53) $f(W,n_i)=-\cos\big(\theta(n_i{\to}\mathrm{headnode},\,n_i,\,n_i{\to}\mathrm{leftnode})\big)$

Here, the objective in Equation 53 is called the information flow. $\theta(\cdot)$ is the angle between the flow from $n_i{\to}\mathrm{headnode}$ to $n_i$ and the flow from $n_i$ to $n_i{\to}\mathrm{leftnode}$. To minimize the objective while ensuring that the topology matrix W contains no abnormal values, we adopt a dynamic programming strategy to update W. Briefly, at the first linking node $n_i$ we calculate the other two possible angles, $\theta(n_i{\to}\mathrm{headnode},\,n_i,\,n_i{\to}\mathrm{rightnode})$ and $\theta(n_i{\to}\mathrm{leftnode},\,n_i,\,n_i{\to}\mathrm{rightnode})$. The configuration with minimal information flow is selected, and W is updated. Following the updated W, the next branching node is found, and the information flow and W are updated again. This process iterates until all nodes have been updated, yielding the final root nodes $\{r_1,r_2,\ldots,r_m\}$ (a node $r_t$ satisfying $W(r_t,i)\in\{0,1\}$ for $i=1,\ldots,K$ is set as a root node). The pseudo-code for solving the optimization problem is provided below:

Algorithm 3. Generation of Minimal Information Flow Tree.
# Graph defines tree topology of the nodes, t_node->left represents the left child node of t_node, t_node->right represents the right child node of t_node, t_node->head represents the head node of t_node.
Input: N: {N0, N1, ..., Nk}, Graph head: {N0}
    Set = {N0}
    While |Set| > 0:
      t_node = Set[0]
      # calculate the three possible information flows
      res = calc_three_directions(t_node)
      if (res == 0):
        # maintain the original structure
        Set[0] = t_node->left
        Set.push_back(t_node->right)
      if (res == 1):
        # exchange the positions of t_node's two child nodes
        Exchange_child(t_node)
        Set[0] = t_node->left
        Set.push_back(t_node->right)
      if (res == 2):
        # information flows from t_node->left to t_node->right; update the
        # structure along t_node->left and t_node->head, generating a new head if possible
        New_node = Reverse_head(t_node)
        Set[0] = New_node
Output: N: {N0, N1, ..., Nk}, Graph head: {N0, N1, ..., Nm}

Please note that the model is capable of merging binary trees. When two branches of neurites have identifiable root nodes and one root node is in close proximity to the skeleton points on the other branch, that root node does not contribute to the information flow before fusion. After fusion, however, the root node becomes a linking node in the other branch, contributing an additional negative information-flow value. This merging process requires a distance threshold: two branches are merged when the minimum distance between the root node of one branch and the skeleton points of the other is less than 8 voxels for individual image blocks, or less than 8, 12, or 16 voxels for fused image blocks, respectively. When splitting a branch of neurites, the minimal information flow tree model is likewise applied to both individual and fused image blocks.
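Taking the information flow at a node as the negative cosine of the turn angle (so a smooth continuation contributes −1, consistent with Equation 53 and the negative contributions discussed above), the per-node computation and the three-way comparison can be sketched as follows; the function names are our own.

```python
import numpy as np

def information_flow(head, node, child):
    """Information flow through a linking node: the negative cosine of the
    angle between the incoming flow (head -> node) and the outgoing flow
    (node -> child). A straight continuation gives -1, a reversal +1."""
    a = np.asarray(node, float) - np.asarray(head, float)
    b = np.asarray(child, float) - np.asarray(node, float)
    return -np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

def best_continuation(head, node, left, right):
    """At a branching node, compare the three candidate flows and pick the
    one with minimal information flow, mirroring the dynamic update of W."""
    flows = {
        'head->left': information_flow(head, node, left),
        'head->right': information_flow(head, node, right),
        'left->right': information_flow(left, node, right),
    }
    return min(flows, key=flows.get)
```

Selecting the minimum over the three candidates at each branching node is exactly the per-node decision that the dynamic programming update of W performs.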

The fusion of neurites reconstruction


By using the MIFT model to revise the neurite reconstructions in individual image blocks, the root nodes and leaf nodes of each branch of neurites can be extracted directly. Here, we use a 0–1 assignment model to merge the reconstructions of two adjacent image blocks. For two adjacent image blocks P and Q, the neurite skeleton nodes located near the common boundary are extracted as $\{p_1,p_2,\ldots,p_m\}$ and $\{q_1,q_2,\ldots,q_n\}$, and the cost matrix is constructed as follows:

(54) $C=\begin{bmatrix}\begin{matrix}c(p_1,q_1) & \cdots & c(p_1,q_n)\\ \vdots & \ddots & \vdots\\ c(p_m,q_1) & \cdots & c(p_m,q_n)\end{matrix} & D_{m\times m}\\ D_{n\times n} & D_{n\times m}\end{bmatrix}_{(m+n)\times(m+n)}$
(55) $c(p_i,q_j)=d(p_i,q_j)\times\big(2-\theta(L(p_i),L(q_j))\big)$

Here, $D_{m\times m}$, $D_{n\times n}$, and $D_{n\times m}$ are auxiliary matrices whose entries are all set to 20. $d(p_i,q_j)$ represents the Euclidean distance between terminals $p_i$ and $q_j$. $L(p_i)$ and $L(q_j)$ are lines fitted to the skeleton points near $p_i$ and $q_j$, and $\theta(L(p_i),L(q_j))$ is the cosine of the angle between them. Thus, the 0–1 assignment problem is formed as follows:

(56) $A^{*}=\arg\min_{A}\sum_{i=1}^{m+n}\sum_{j=1}^{m+n}A_{i,j}C_{i,j}$
(57) $\text{s.t.}\quad \sum_{i=1}^{m+n}A_{i,j}=1\quad (j=1,2,\ldots,m+n)$
(58) $\sum_{j=1}^{m+n}A_{i,j}=1\quad (i=1,2,\ldots,m+n)$

Here, $A^{*}$ represents the connectivity relationship between nodes: if $A_{i,j}=1$, block P's node i connects to block Q's node j; if $A_{i,j}=0$, there is no connection. The constraints $\sum_{i=1}^{m+n}A_{i,j}=1$ and $\sum_{j=1}^{m+n}A_{i,j}=1$ restrict each node to connect with at most one other node. With the solved matrix $A^{*}$, the neurite skeletons of adjacent blocks can be merged and fused skeleton structures obtained.

Statistical analysis


In this study, three commonly used metrics defined in Quan et al., 2016 were computed: precision, recall, and f1-score, which measure the fidelity between the reconstruction results and the ground truth. They are defined as follows:

(59) $\mathrm{precision}(R,G)=\dfrac{|R\cap G|}{|R|}=\dfrac{|TP|}{|R|}$
(60) $\mathrm{recall}(R,G)=\dfrac{|R\cap G|}{|G|}=\dfrac{|TP|}{|G|}$
(61) $\mathrm{f1\text{-}score}(R,G)=\dfrac{2\cdot\mathrm{precision}\times\mathrm{recall}}{\mathrm{precision}+\mathrm{recall}}$

R represents the point set of the reconstructed neurons, G represents the point set of the ground truth, and $|\cdot|$ denotes the number of points in a set. The three metrics are first computed on each individual neuron and then averaged, weighting each neuron by the number of points in its ground-truth neurites.
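Equations 59–61 can be sketched directly on point sets. In practice, reconstructed and ground-truth points are matched within a small distance tolerance rather than by exact coordinate equality; this minimal version, with our own function name, uses exact set intersection for clarity.

```python
def reconstruction_metrics(R, G):
    """Precision, recall, and f1-score (Eqs. 59-61) between the point set R
    of a reconstruction and the point set G of the ground truth."""
    R, G = set(R), set(G)
    tp = len(R & G)                       # |R ∩ G|
    precision = tp / len(R)
    recall = tp / len(G)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1
```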

We also calculated the signal-to-noise ratio (SNR) of the data as follows: for a given data block B and its corresponding ground-truth skeleton S, we first densify S by linear interpolation so that the Euclidean distance between adjacent skeleton points is less than 1 voxel. Next, we expand each point of the densified skeleton S′ into a spherical mask with a radius of 3 voxels; the resulting region serves as the foreground mask. Finally, the SNR is computed from the mean intensity of the foreground points and the standard deviation of the background points as follows:

(62) $\mathrm{Mean}_{\mathrm{foreground}}=\sum_{x\in B}I(x)\,\sigma_1(x)\Big/\sum_{x\in B}\sigma_1(x)$
(63) $\mathrm{Mean}_{\mathrm{background}}=\sum_{x\in B}I(x)\,\sigma_2(x)\Big/\sum_{x\in B}\sigma_2(x)$
(64) $\mathrm{Std}_{\mathrm{background}}=\sqrt{\sum_{x\in B}\big(I(x)-\mathrm{Mean}_{\mathrm{background}}\big)^2\,\sigma_2(x)\Big/\sum_{x\in B}\sigma_2(x)}$
(65) $\sigma_1(x)=\begin{cases}1, & x\in \mathrm{mask}\\ 0, & x\notin \mathrm{mask}\end{cases}$
(66) $\sigma_2(x)=\begin{cases}1, & x\notin \mathrm{mask}\\ 0, & x\in \mathrm{mask}\end{cases}$

Here, I(x) represents the signal intensity of the voxel at position x. The SNR is calculated from $\mathrm{Mean}_{\mathrm{foreground}}$ and $\mathrm{Std}_{\mathrm{background}}$ by the following formula:

(67) $\mathrm{SNR}=10\log_{10}\big(\mathrm{Mean}_{\mathrm{foreground}}/\mathrm{Std}_{\mathrm{background}}\big)$
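Equations 62–67 amount to a masked mean and a masked (population) standard deviation; a minimal sketch with our own function name, where `mask` is the skeleton-dilated foreground mask described above:

```python
import numpy as np

def snr(volume, mask):
    """SNR of an image block (Eqs. 62-67): mean foreground intensity over
    the standard deviation of the background, in decibels."""
    fg = volume[mask]                  # voxels covered by sigma_1
    bg = volume[~mask]                 # voxels covered by sigma_2
    mean_fg = fg.mean()                # Eq. 62
    std_bg = bg.std()                  # population std, matching Eq. 64
    return 10.0 * np.log10(mean_fg / std_bg)
```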

Data availability

The data for Figure 1C, Figure 2, Figure 3, Figure 4, Figure 5, Figure 6 and Figure 6—figure supplement 2 is available in https://zenodo.org/records/15589145. The training code of the segmentation network is available on Github: https://github.com/FateUBW0227/Seg_Net (copy archived at Cai, 2025). The software of PointTree and its user guideline are available at https://zenodo.org/records/15589145.

The following data sets were generated
    1. Lin C
    (2025) Zenodo
    Dataset for PointTree: Automatic and accurate reconstruction of long-range axonal projections of single-neuron.
    https://doi.org/10.5281/zenodo.15589145

References

  1. Conference
    1. Çiçek Ö
    2. Abdulkadir A
    3. Lienkamp SS
    4. Brox T
    5. Ronneberger O
    (2016)
    3D U-Net: learning dense volumetric segmentation from sparse annotation
    Medical Image Computing and Computer-Assisted Intervention–MICCAI 2016: 19th International Conference.
  2. Book
    1. Reynolds DA
    (2009) Gaussian mixture models
    In: Li SZ, Jain A, editors. Encyclopedia of Biometrics. Springer. pp. 659–663.
    https://doi.org/10.1007/978-0-387-73003-5_196

Article and author information

Author details

  1. Lin Cai

    1. Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics, Huazhong University of Science and Technology, Wuhan, China
    2. MOE Key Laboratory for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics, Huazhong University of Science and Technology, Wuhan, China
    Contribution
    Validation, Investigation, Methodology, Writing – original draft, Writing – review and editing
    Competing interests
    No competing interests declared
ORCID: 0000-0002-4413-3599
  2. Taiyu Fan

    1. Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics, Huazhong University of Science and Technology, Wuhan, China
    2. MOE Key Laboratory for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics, Huazhong University of Science and Technology, Wuhan, China
    Contribution
    Validation, Investigation, Writing – original draft
    Competing interests
    No competing interests declared
  3. Xuzhong Qu

    1. Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics, Huazhong University of Science and Technology, Wuhan, China
    2. MOE Key Laboratory for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics, Huazhong University of Science and Technology, Wuhan, China
    Contribution
    Software, Investigation
    Competing interests
    No competing interests declared
  4. Ying Zhang

    1. Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics, Huazhong University of Science and Technology, Wuhan, China
    2. MOE Key Laboratory for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics, Huazhong University of Science and Technology, Wuhan, China
    Contribution
    Validation
    Competing interests
    No competing interests declared
  5. Xianyu Gou

    1. Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics, Huazhong University of Science and Technology, Wuhan, China
    2. MOE Key Laboratory for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics, Huazhong University of Science and Technology, Wuhan, China
    Contribution
    Investigation
    Competing interests
    No competing interests declared
  6. Quanwei Ding

    1. Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics, Huazhong University of Science and Technology, Wuhan, China
    2. MOE Key Laboratory for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics, Huazhong University of Science and Technology, Wuhan, China
    3. School of Computer Science and Engineering, Hubei Key Laboratory of Intelligent Robot, Wuhan Institute of Technology, Wuhan, China
    Contribution
    Validation
    Competing interests
    No competing interests declared
  7. Weihua Feng

    1. Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics, Huazhong University of Science and Technology, Wuhan, China
    2. MOE Key Laboratory for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics, Huazhong University of Science and Technology, Wuhan, China
    Contribution
    Investigation
    Competing interests
    No competing interests declared
  8. Tingting Cao

    1. Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics, Huazhong University of Science and Technology, Wuhan, China
    2. MOE Key Laboratory for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics, Huazhong University of Science and Technology, Wuhan, China
    Contribution
    Validation
    Competing interests
    No competing interests declared
  9. Xiaohua Lv

    1. Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics, Huazhong University of Science and Technology, Wuhan, China
    2. MOE Key Laboratory for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics, Huazhong University of Science and Technology, Wuhan, China
    Contribution
    Supervision
    Competing interests
    No competing interests declared
  10. Xiuli Liu

    1. Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics, Huazhong University of Science and Technology, Wuhan, China
    2. MOE Key Laboratory for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics, Huazhong University of Science and Technology, Wuhan, China
    Contribution
    Supervision
    Competing interests
    No competing interests declared
  11. Qing Huang

    1. Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics, Huazhong University of Science and Technology, Wuhan, China
    2. MOE Key Laboratory for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics, Huazhong University of Science and Technology, Wuhan, China
    3. School of Computer Science and Engineering, Hubei Key Laboratory of Intelligent Robot, Wuhan Institute of Technology, Wuhan, China
    Contribution
    Supervision
    Competing interests
    No competing interests declared
  12. Tingwei Quan

    1. Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics, Huazhong University of Science and Technology, Wuhan, China
    2. MOE Key Laboratory for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics, Huazhong University of Science and Technology, Wuhan, China
    Contribution
    Supervision, Funding acquisition, Investigation, Methodology, Writing – original draft, Writing – review and editing
    For correspondence
    quantingwei@hust.edu.cn
    Competing interests
    No competing interests declared
ORCID: 0000-0002-8393-4292
  13. Shaoqun Zeng

    1. Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics, Huazhong University of Science and Technology, Wuhan, China
    2. MOE Key Laboratory for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics, Huazhong University of Science and Technology, Wuhan, China
    Contribution: Supervision
    Competing interests: No competing interests declared

Funding

National Natural Science Foundation of China (32471146)

  • Tingwei Quan

PLA General Hospital (N20240194)

  • Tingwei Quan

The funders had no role in study design, data collection and interpretation, or the decision to submit the work for publication.

Acknowledgements

We thank the members of the Britton Chance Center for Biomedical Photonics for advice and help in experiments. This work was supported by the National Natural Science Foundation of China (32471146) and the project N20240194.

Ethics

All animal experiments followed procedures approved by the Institutional Animal Ethics Committee of the Huazhong University of Science and Technology.

Version history

  1. Sent for peer review:
  2. Preprint posted:
  3. Reviewed Preprint version 1:
  4. Reviewed Preprint version 2:
  5. Version of Record published:

Cite all versions

You can cite all versions using the DOI https://doi.org/10.7554/eLife.102840. This DOI represents all versions, and will always resolve to the latest one.

Copyright

© 2024, Cai et al.

This article is distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use and redistribution provided that the original author and source are credited.

Metrics

  • 1,275 views
  • 32 downloads
  • 1 citation

Views, downloads and citations are aggregated across all versions of this paper published by eLife.


Cite this article

  1. Lin Cai
  2. Taiyu Fan
  3. Xuzhong Qu
  4. Ying Zhang
  5. Xianyu Gou
  6. Quanwei Ding
  7. Weihua Feng
  8. Tingting Cao
  9. Xiaohua Lv
  10. Xiuli Liu
  11. Qing Huang
  12. Tingwei Quan
  13. Shaoqun Zeng
(2025)
Automatic and accurate reconstruction of long-range axonal projections of single-neuron in mouse brain
eLife 13:RP102840.
https://doi.org/10.7554/eLife.102840.3
