
DeepBranchAI: A Novel Cascade Workflow Enabling Accessible 3D Branching Network Segmentation

Maltsev, A. V.; Hartnell, L.; Ferrucci, L.

bioRxiv preprint, 2026-03-29 (bioinformatics). DOI: 10.64898/2026.03.25.714249

Three-dimensional branching networks pervade biological, natural, and man-made systems as pathways through volumetric space. Segmentation is required to correctly reconstruct these networks, in whole or in part, for analysis. This presents a unique challenge: minor voxel misclassifications can cause sporadic connectivity shifts, in which connected elements appear to disconnect (false negatives) or spurious connections appear (false positives). Addressing this topological vulnerability requires 3D models, since 2D slice-by-slice approaches cannot maintain connectivity across the x, y, and z axes. Yet tracking 3D architecture demands substantially more analytical resources than a 2D strategy, because producing volumetric annotations requires extraordinary amounts of expert time. This creates a fundamental annotation bottleneck: with sparse training data available, deep learning models tend to overfit the available volumes and fail to generalize to novel ones. We present a cascade training workflow that overcomes this bottleneck through a positive feedback loop in which trained models become annotation aids for subsequent volumes. The workflow begins with random forests that generate initial drafts from minimal labels, followed by expert refinement that cycles ever closer to the ground truth. As refined data accumulate, training transitions from 2D to 3D architectures, systematically expanding sparse datasets into comprehensive training sets. The outcome is a 3D nnU-Net model optimized for topology-preserving segmentation, which we dub DeepBranchAI. Training validation on heavily branching mitochondrial networks, imaged by focused ion beam scanning electron microscopy (FIB-SEM, 15 nm voxel resolution), achieved a Dice Similarity Coefficient (DSC) of 0.942 across 5-fold cross-validation.
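The connectivity fragility described above can be illustrated with a minimal, self-contained sketch (the helper and variable names are ours for illustration, not from the paper): flipping a single voxel of a one-voxel-wide branch to background splits one connected component into two, even though the voxel-wise error is tiny.

```python
from collections import deque

def count_components(voxels):
    """Count 6-connected components in a set of (x, y, z) voxel coordinates."""
    voxels = set(voxels)
    seen, components = set(), 0
    for start in voxels:
        if start in seen:
            continue
        components += 1
        queue = deque([start])
        seen.add(start)
        while queue:
            x, y, z = queue.popleft()
            # Visit the six face-adjacent neighbors
            for dx, dy, dz in [(1, 0, 0), (-1, 0, 0), (0, 1, 0),
                               (0, -1, 0), (0, 0, 1), (0, 0, -1)]:
                n = (x + dx, y + dy, z + dz)
                if n in voxels and n not in seen:
                    seen.add(n)
                    queue.append(n)
    return components

# A 1-voxel-wide branch running along the z axis: one component
branch = [(0, 0, z) for z in range(10)]
print(count_components(branch))   # 1

# A single false-negative voxel splits the branch into two components
broken = [v for v in branch if v != (0, 0, 5)]
print(count_components(broken))   # 2
```

This is why a voxel-wise accuracy of 99.9% can still correspond to a badly wrong reconstructed topology, and why topology-preserving training is the paper's stated optimization target.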
Transfer learning to vascular networks (VESSEL12 dataset, CT volumes, a 30,000-fold difference in voxel size), training on as little as 10% of the target data, achieved 97.05% accuracy against ground truth, validating that the learned features represent domain-general topological principles. The workflow reduces annotation time from months to weeks while transforming sparse initial labels into robust training sets. The complete implementation, trained weights, and validation code are provided open source.
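For reference, the Dice Similarity Coefficient reported above is a standard overlap metric between a predicted and a ground-truth binary mask; a minimal sketch of its computation on 3D volumes (the toy masks below are our own, not data from the paper):

```python
import numpy as np

def dice_coefficient(pred, truth):
    """Dice Similarity Coefficient: 2|A ∩ B| / (|A| + |B|) for binary masks."""
    pred = np.asarray(pred, dtype=bool)
    truth = np.asarray(truth, dtype=bool)
    intersection = np.logical_and(pred, truth).sum()
    total = pred.sum() + truth.sum()
    return 2.0 * intersection / total if total > 0 else 1.0

# Toy 3D masks: prediction covers 8 voxels, ground truth 12, overlap 8
pred = np.zeros((4, 4, 4), dtype=bool)
truth = np.zeros((4, 4, 4), dtype=bool)
pred[1:3, 1:3, 1:3] = True
truth[1:3, 1:3, 1:4] = True
print(dice_coefficient(pred, truth))  # 2*8 / (8 + 12) = 0.8
```

A DSC of 0.942 therefore indicates near-complete voxel overlap, though, as noted above, overlap metrics alone do not guarantee preserved connectivity.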

Matching journals

The top 3 journals account for 50% of the predicted probability mass.

Rank  Journal                                          Papers in training set  Percentile  Probability
 1    Nature Methods                                   336                     Top 0.1%    38.2%
 2    Nature Communications                            4913                    Top 24%      7.3%
 3    Nature Biotechnology                             147                     Top 1%       6.5%
      (50% of probability mass above this line)
 4    Nature                                           575                     Top 6%       4.0%
 5    Bioinformatics                                   1061                    Top 5%       3.7%
 6    Science                                          429                     Top 8%       3.6%
 7    Advanced Science                                 249                     Top 8%       2.4%
 8    Communications Biology                           886                     Top 6%       1.9%
 9    Cell Systems                                     167                     Top 6%       1.8%
10    Journal of Structural Biology                    58                      Top 0.7%     1.7%
11    Development                                      440                     Top 2%       1.7%
12    Scientific Reports                               3102                    Top 59%      1.7%
13    Proceedings of the National Academy of Sciences  2130                    Top 34%      1.5%
14    Nature Machine Intelligence                      61                      Top 2%       1.3%
15    Nature Computational Science                     50                      Top 0.9%     1.2%
16    Cell                                             370                     Top 14%      1.0%
17    PLOS ONE                                         4510                    Top 64%      0.9%
18    Patterns                                         70                      Top 2%       0.9%
19    eLife                                            5422                    Top 53%      0.9%
20    Nature Biomedical Engineering                    42                      Top 2%       0.8%
21    iScience                                         1063                    Top 34%      0.7%
22    Genome Biology                                   555                     Top 8%       0.7%
23    Briefings in Bioinformatics                      326                     Top 8%       0.5%
24    Cell Reports Methods                             141                     Top 7%       0.5%
25    Nature Medicine                                  117                     Top 6%       0.5%