
Expert-level pediatric brain tumor segmentation in a limited data scenario with stepwise transfer learning

Boyd, A.; Ye, Z.; Prabhu, S.; Tjong, M.; Zha, Y.; Vajapeyam, S.; Hayat, H.; Chopra, R.; Liu, K.; Nabavizadeh, A.; Resnick, A.; Mueller, S.; Haas-Kogan, D.; Aerts, H.; Poussaint, T.; Kann, B.

2023-06-30 radiology and imaging
DOI: 10.1101/2023.06.29.23292048
Abstract

Purpose: Artificial intelligence (AI)-automated tumor delineation for pediatric gliomas would enable real-time volumetric evaluation to support diagnosis, treatment response assessment, and clinical decision-making. Auto-segmentation algorithms for pediatric tumors are rare, owing to limited data availability, and have yet to demonstrate clinical translation.

Methods: We leveraged two datasets from a national brain tumor consortium (n=184) and a pediatric cancer center (n=100) to develop, externally validate, and clinically benchmark deep learning neural networks for pediatric low-grade glioma (pLGG) segmentation using a novel in-domain, stepwise transfer learning approach. The best model [by Dice similarity coefficient (DSC)] was externally validated and subjected to randomized, blinded evaluation by three expert clinicians, who assessed the clinical acceptability of expert- and AI-generated segmentations via 10-point Likert scales and Turing tests.

Results: The best AI model used in-domain, stepwise transfer learning (median DSC: 0.877 [IQR 0.715-0.914]) versus the baseline model (median DSC: 0.812 [IQR 0.559-0.888]; p<0.05). On external testing (n=60), the AI model yielded accuracy comparable to inter-expert agreement (median DSC: 0.834 [IQR 0.726-0.901] vs. 0.861 [IQR 0.795-0.905]; p=0.13). On clinical benchmarking (n=100 scans, 300 segmentations from 3 experts), the experts rated the AI model higher on average than the other experts (median Likert rating: 9 [IQR 7-9] vs. 7 [IQR 7-9]; p<0.05 for each). The AI segmentations also had significantly higher (p<0.05) overall acceptability than the experts on average (80.2% vs. 65.4%). Experts correctly identified the origins of AI segmentations in an average of 26.0% of cases.

Conclusions: Stepwise transfer learning enabled expert-level, automated pediatric brain tumor segmentation and volumetric measurement with a high level of clinical acceptability. This approach may enable the development and translation of AI imaging segmentation algorithms in limited-data scenarios.

Summary: The authors propose and use a novel stepwise transfer learning approach to develop and externally validate a deep learning auto-segmentation model for pediatric low-grade glioma whose performance and clinical acceptability were on par with those of pediatric neuroradiologists and radiation oncologists.

Key Points:
- There are limited imaging data available to train deep learning tumor segmentation models for pediatric brain tumors, and adult-centric models generalize poorly in the pediatric setting.
- Stepwise transfer learning improved deep learning segmentation performance (Dice score: 0.877 [IQR 0.715-0.914]) compared to other methodologies and yielded segmentation accuracy comparable to human experts on external validation.
- On blinded clinical acceptability testing, the model received higher average Likert ratings and clinical acceptability than the experts (Transfer-Encoder model vs. average expert: 80.2% vs. 65.4%).
- Turing tests showed uniformly low ability of experts to correctly identify Transfer-Encoder model segmentations as AI-generated versus human-generated (mean accuracy: 26%).
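The Dice similarity coefficient (DSC) used throughout the abstract measures voxel-wise overlap between two segmentation masks: twice the size of their intersection divided by the sum of their sizes. A minimal NumPy sketch (the function name and toy 1-D "masks" are illustrative, not taken from the paper's code):

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    total = pred.sum() + truth.sum()
    if total == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * intersection / total

# toy example: masks agree on 1 of the 2 foreground voxels
a = np.array([1, 1, 0, 0])
b = np.array([1, 0, 0, 0])
print(dice_coefficient(a, b))  # prints 0.6666666666666666
```

The same formula applies unchanged to 3-D MRI masks, since NumPy reduces over all axes; a DSC of 0.877, as reported for the best model, means the predicted and expert masks overlap on roughly 88% of their combined volume.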

Matching journals

The top 3 journals account for 50% of the predicted probability mass.

Rank  Journal                                                      Based on      Percentile  Probability
1     Neuro-Oncology Advances                                      14 papers     Top 0.1%    40.8%
2     European Radiology                                           11 papers     Top 0.6%    6.7%
3     Scientific Reports                                           701 papers    Top 32%     5.6%
      --- 50% of probability mass above ---
4     Radiotherapy and Oncology                                    11 papers     Top 0.6%    3.1%
5     NeuroImage: Clinical                                         77 papers     Top 4%      2.6%
6     NeuroImage                                                   36 papers     Top 2%      2.4%
7     Nature Communications                                        483 papers    Top 25%     2.4%
8     PLOS ONE                                                     1737 papers   Top 84%     2.4%
9     Human Brain Mapping                                          53 papers     Top 4%      2.0%
10    Scientific Data                                              30 papers     Top 1%      1.9%
11    Frontiers in Oncology                                        34 papers     Top 4%      1.4%
12    PLOS Digital Health                                          88 papers     Top 9%      1.4%
13    Informatics in Medicine Unlocked                             11 papers     Top 2%      1.3%
14    Clinical Cancer Research                                     22 papers     Top 4%      0.9%
15    Journal of the American Medical Informatics Association      53 papers     Top 6%      0.8%
16    JCO Clinical Cancer Informatics                              14 papers     Top 3%      0.8%
17    Journal of Magnetic Resonance Imaging                        10 papers     Top 2%      0.8%
18    npj Digital Medicine                                         85 papers     Top 12%     0.8%
19    The Lancet Digital Health                                    25 papers     Top 4%      0.8%
20    Stroke: Vascular and Interventional Neurology                12 papers     Top 1%      0.8%
21    npj Precision Oncology                                       14 papers     Top 3%      0.8%
22    BMJ Open                                                     553 papers    Top 52%     0.7%
23    eBioMedicine                                                 82 papers     Top 8%      0.7%
24    JAMA Network Open                                            125 papers    Top 20%     0.7%
25    International Journal of Radiation Oncology*Biology*Physics  13 papers     Top 3%      0.7%
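The "top 3 journals account for 50% of the predicted probability mass" claim follows from cumulating the predicted probabilities in rank order until the running total reaches 50%. A small sketch using the top-5 values from the table (the variable names are illustrative):

```python
# top-5 predicted probabilities (%), in rank order, from the table above
probs = [40.8, 6.7, 5.6, 3.1, 2.6]

cumulative = 0.0
cutoff_rank = None
for rank, p in enumerate(probs, start=1):
    cumulative += p
    if cumulative >= 50.0:  # first rank at which half the mass is covered
        cutoff_rank = rank
        break

print(cutoff_rank)  # prints 3  (40.8 + 6.7 + 5.6 = 53.1% >= 50%)
```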