Comparison of Deep Learning Tools for Optic Nerve Axon Quantification Finds Limited Generalizability on Independent Validation
Chuter, B.; Emmert, N.; Kim, M. Y.; Dave, N.; Herrin, J.; Zhou, Z.; Wall, G.; Palmer, A.; Chen, H.; Hollingsworth, T. J.; Jablonski, M. M.
Purpose: Machine learning approaches for automated quantification of optic nerve histology have emerged as potential tools for objective assessment of axonal injury in experimental glaucoma models. However, the generalizability of these models to independent datasets remains unclear. Guided by a scoping review of the literature, this study performed independent validation testing of publicly available models on a novel rat optic nerve dataset to assess their generalizability.

Methods: We conducted a scoping review following PRISMA-ScR guidelines. PubMed, EMBASE, Scopus, and Cochrane CENTRAL were searched from 2000 through 2025. Two reviewers independently screened records and extracted data on model characteristics and performance metrics. Additionally, we performed independent validation of three models (AxoNet, AxonDeepSeg, AxoNet 2.0) on a novel rat optic nerve dataset comprising 57 images with 9,514 manually annotated axons. Because AxonDeep is not publicly available, we instead evaluated AxonDeepSeg, a separate publicly available deep learning-based tool that, while not previously applied to optic nerve tissue, is widely used for nerve fiber segmentation.

Results: From 2,036 records, four manuscripts describing three deep learning models met inclusion criteria. Published correlation coefficients between model predictions and reference counts ranged from 0.959 to 0.99. On independent validation, performance was reduced: AxoNet 2.0 achieved the highest correlation (r = 0.89), followed by AxonDeepSeg (r = 0.86) and AxoNet (r = 0.79). Segmentation quality metrics revealed high precision (>0.94) but low recall (0.18 to 0.27), with Dice coefficients of 0.29 to 0.40, substantially below the published benchmark of 0.81.

Conclusions: Deep learning models for optic nerve histology demonstrate strong within-study performance but show meaningful performance decrements when applied to independent datasets. The observed generalizability gap (correlations 0.07 to 0.182 points below published values) demonstrates the need for standardized validation datasets and multi-center testing before widespread adoption of these tools.
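The pattern of high precision with low recall necessarily capping the Dice coefficient follows directly from the definitions: Dice = 2TP / (2TP + FP + FN), so a model that misses most true axons (large FN) scores a low Dice even when nearly every detection it does make is correct. A minimal sketch of these metrics for binary segmentation masks (NumPy-based; the function name is illustrative, not from any of the evaluated tools):

```python
import numpy as np

def segmentation_metrics(pred, truth):
    """Precision, recall, and Dice for two binary masks of equal shape."""
    pred = np.asarray(pred, dtype=bool)
    truth = np.asarray(truth, dtype=bool)
    tp = np.logical_and(pred, truth).sum()   # correctly detected axon pixels
    fp = np.logical_and(pred, ~truth).sum()  # false detections
    fn = np.logical_and(~pred, truth).sum()  # missed axon pixels
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    dice = 2 * tp / (2 * tp + fp + fn) if (tp + fp + fn) else 0.0
    return precision, recall, dice

# Toy illustration of the reported regime: all detections correct (precision 1.0)
# but 8 of 10 true positives missed, giving recall 0.2 and Dice ~0.33.
truth = np.zeros(20, dtype=bool); truth[:10] = True
pred = np.zeros(20, dtype=bool); pred[:2] = True
p, r, d = segmentation_metrics(pred, truth)
```

Note that a high count correlation (r near 0.9) can coexist with a low Dice: correlation measures agreement in per-image totals, while Dice penalizes spatial mismatch of individual detections.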