
Clinical Acceptability of Automatically Generated Lymph Node Levels and Structures of Deglutition and Mastication for Head and Neck Cancer Patient Radiation Treatment Planning

Maroongroge, S.; Mohamed, A. S. R.; Nguyen, C.; Guma-De La Vega, J.; Frank, S. J.; Garden, A. S.; Gunn, B. B.; Lee, A.; Mayo, L. L.; Moreno, A. C.; Morrison, W. H.; Phan, J.; Spiotto, M. T.; Court, L. E.; Fuller, C. D.; Rosenthal, D.; Netherton, T. J.

Posted 2023-08-09 on medRxiv (radiology and imaging). DOI: 10.1101/2023.08.07.23293787
Abstract

Purpose/Objective(s): Here we investigate an approach to develop and clinically validate auto-contouring models for lymph node levels and structures of deglutition and mastication in the head and neck. An objective of this work is to provide high-quality resources to the scientific community to promote advancement of treatment planning, clinical trial management, and toxicity studies for the head and neck.

Materials/Methods: CTs of 145 patients who were irradiated for a head and neck primary malignancy at MD Anderson Cancer Center were retrospectively curated. Data were contoured by radiation oncologists and a resident physician and divided into two separate cohorts. One cohort was used to analyze lymph node levels (IA, IB, II, III, IV, V, RP); the other was used to analyze 17 swallowing and chewing structures. Forty-seven patients formed the lymph node level cohort (training/testing = 32/15). All of these patients received definitive radiotherapy without a nodal dissection to minimize anatomic perturbation of the lymph node levels. The remaining 98 patients formed the swallowing/chewing structures cohort (training/testing = 78/20). Separate nnU-Net models were trained and validated using the separate cohorts. For the lymph node levels, two double-blinded studies were used to score preference and clinical acceptability (using a 5-point Likert scale) of AI vs. human contours. For the swallowing and chewing structures, clinical acceptability was scored. Quantitative analyses of the test sets compared AI and human contours for all structures using the Dice similarity coefficient (DSC) and the 95th-percentile Hausdorff distance (HD95).

Results: Across all lymph node levels (IA, IB, II, III, IV, V, RP), median DSC ranged from 0.77 to 0.89 for AI vs. manual contours in the testing cohort. Across all lymph node levels, the AI contour was superior or equally preferred to the manual contour at rates ranging from 75% to 91% in the first blinded study. In the second blinded study, physician preference for the manual vs. AI contour was statistically different only for the RP contours (p < 0.01); thus, there was no significant difference in clinical acceptability between manual and AI contours for nodal levels I-V. Across all physician-generated contours, 82% were rated as usable with stylistic to no edits; across all AI-generated contours, 92% were rated as usable with stylistic to no edits. For the swallowing structures, median DSC ranged from 0.86 to 0.96 and was greater than 0.90 for 11 of 17 structure types. Of the 340 contours in the test set, only 4% required minor edits.

Conclusions: An approach to generate clinically acceptable automated contours for lymph node levels and swallowing and chewing structures in the head and neck was demonstrated. For nodal levels I-V, there was no significant difference in clinical acceptability between manual and AI contours. Of the two testing cohorts (lymph node levels; swallowing and chewing structures), only 8% and 4% of structures, respectively, required minor edits. All training and testing data are being made publicly available on The Cancer Imaging Archive.
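The two quantitative metrics reported above (DSC and HD95) can be illustrated with a minimal NumPy sketch. This is not the authors' evaluation code: for simplicity, the Hausdorff term here is computed over all foreground voxels rather than extracted surface voxels, and the brute-force distance matrix only suits small masks.

```python
import numpy as np

def dice(a: np.ndarray, b: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    # Convention: two empty masks are treated as a perfect match.
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

def hd95(a: np.ndarray, b: np.ndarray, spacing=(1.0, 1.0, 1.0)) -> float:
    """Symmetric 95th-percentile Hausdorff distance between two masks.

    Simplified sketch: uses all foreground voxel coordinates (scaled by
    voxel spacing) instead of surface points, with brute-force pairwise
    distances, so it is only practical for small arrays.
    """
    pa = np.argwhere(a) * np.asarray(spacing)
    pb = np.argwhere(b) * np.asarray(spacing)
    d = np.linalg.norm(pa[:, None, :] - pb[None, :, :], axis=-1)
    # Directed 95th-percentile distances in both directions, then take the max.
    return max(np.percentile(d.min(axis=1), 95),
               np.percentile(d.min(axis=0), 95))
```

For clinical-scale CT masks, surface extraction and spatial indexing (e.g. `scipy.spatial.cKDTree`) would replace the brute-force distance matrix.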

Matching journals

The top 4 journals account for 50% of the predicted probability mass.

Each entry lists the journal, its paper count in the training set, its percentile rank, and the predicted probability.

1. Medical Physics (14 papers in training set, Top 0.1%): 21.9%
2. International Journal of Radiation Oncology*Biology*Physics (21 papers in training set, Top 0.1%): 17.8%
3. Radiotherapy and Oncology (18 papers in training set, Top 0.1%): 9.8%
4. JCO Clinical Cancer Informatics (18 papers in training set, Top 0.1%): 6.6%
(50% of probability mass above this line)
5. Scientific Reports (3102 papers in training set, Top 20%): 6.2%
6. Physics in Medicine & Biology (17 papers in training set, Top 0.1%): 4.2%
7. PLOS ONE (4510 papers in training set, Top 35%): 4.0%
8. Frontiers in Oncology (95 papers in training set, Top 1%): 3.9%
9. Scientific Data (174 papers in training set, Top 0.5%): 3.5%
10. European Radiology (14 papers in training set, Top 0.3%): 2.0%
11. npj Precision Oncology (48 papers in training set, Top 0.6%): 1.6%
12. European Journal of Nuclear Medicine and Molecular Imaging (19 papers in training set, Top 0.2%): 1.6%
13. Journal of Medical Imaging (11 papers in training set, Top 0.1%): 1.6%
14. JAMA Network Open (127 papers in training set, Top 3%): 1.4%
15. Archives of Clinical and Biomedical Research (28 papers in training set, Top 1%): 1.2%
16. Diagnostics (48 papers in training set, Top 2%): 0.9%
17. Neuro-Oncology Advances (24 papers in training set, Top 0.4%): 0.9%
18. Computer Methods and Programs in Biomedicine (27 papers in training set, Top 0.9%): 0.8%
19. Nature Communications (4913 papers in training set, Top 62%): 0.8%
20. Journal for ImmunoTherapy of Cancer (64 papers in training set, Top 1%): 0.7%
21. Annals of Translational Medicine (17 papers in training set, Top 1%): 0.7%
22. Cancers (200 papers in training set, Top 5%): 0.7%