
Pretraining Diversity and Clinical Metric Optimization Achieve State-of-the-Art Performance on ChestX-ray14

Fisher, G. R.

2025-10-27 · radiology and imaging
medRxiv · DOI: 10.1101/2025.10.25.25338784

We achieved state-of-the-art performance on the NIH ChestX-ray14 multi-label classification task using a simple 3-model ensemble: mean ROC-AUC 0.940, F1 0.821 (95% CI: 0.799-0.845), PR-AUC 0.827, sensitivity 76.0%, and specificity 98.8% across 14 thoracic diseases. Our primary finding challenges current research priorities: pretraining diversity dominates architectural diversity. A systematic evaluation of 255 ensemble combinations drawn from 8 models spanning three architecture families (ConvNeXt, Vision Transformers, EfficientNet) at multiple resolutions (224×224 to 384×384) revealed that a simple 3-model ConvNeXt ensemble combining ImageNet-1K, ImageNet-21K, and ImageNet-21K-384 pretrained variants outperformed all 252 alternative combinations, including modern Vision Transformers and efficiency-optimized architectures. This ensemble's mean ROC-AUC of 0.940 exceeds recent hybrid transformer approaches (LongMaxViT [1]: 0.932) at substantially lower computational cost. A systematic comparison of five optimization strategies (F1, F_SS, pure sensitivity, Youden's J, validation loss) established that clinical metric optimization outperforms traditional validation-loss optimization by 19.5% in F1 score. F_SS optimization (the harmonic mean of sensitivity and specificity) achieved the best clinical balance: highest sensitivity (73.9%), best Youden's J (0.727), and superior threshold-independent performance (ROC-AUC, PR-AUC). Traditional validation-loss optimization failed to align with diagnostic utility despite achieving mathematical convergence. Strategic pretraining selection and clinical metric optimization yield greater performance improvements than architectural innovation alone, enabling competitive state-of-the-art results on accessible computational resources (AWS g5.2xlarge, $1.21/hr).
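The F_SS criterion mentioned in the abstract (harmonic mean of sensitivity and specificity) can be sketched as a threshold-selection rule. This is a minimal illustration, not the paper's implementation: the function names and the 0.01-step threshold grid are assumptions made here for clarity.

```python
import numpy as np

def f_ss(sensitivity, specificity):
    # Harmonic mean of sensitivity and specificity (F_SS),
    # analogous to F1 but defined over the two class-conditional rates.
    if sensitivity + specificity == 0:
        return 0.0
    return 2 * sensitivity * specificity / (sensitivity + specificity)

def best_threshold(y_true, y_score):
    # Sweep candidate decision thresholds over predicted probabilities
    # and keep the one that maximizes F_SS (illustrative grid of 0.01 steps).
    best_t, best_v = 0.5, -1.0
    for t in np.linspace(0.0, 1.0, 101):
        pred = (y_score >= t).astype(int)
        tp = np.sum((pred == 1) & (y_true == 1))
        fn = np.sum((pred == 0) & (y_true == 1))
        tn = np.sum((pred == 0) & (y_true == 0))
        fp = np.sum((pred == 1) & (y_true == 0))
        sens = tp / (tp + fn) if tp + fn else 0.0
        spec = tn / (tn + fp) if tn + fp else 0.0
        v = f_ss(sens, spec)
        if v > best_v:
            best_t, best_v = t, v
    return best_t, best_v
```

In a multi-label setting such as ChestX-ray14, a rule like this would be applied per disease label, giving 14 independent operating points rather than a single global threshold.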

Matching journals

The top 2 journals account for 50% of the predicted probability mass.

1. Nature Machine Intelligence · 61 papers in training set · Top 0.1% · 32.4%
2. Nature Medicine · 117 papers in training set · Top 0.1% · 18.3%
   (50% of probability mass above)
3. Nature Communications · 4913 papers in training set · Top 19% · 9.9%
4. The Lancet Digital Health · 25 papers in training set · Top 0.1% · 4.8%
5. Scientific Reports · 3102 papers in training set · Top 28% · 4.2%
6. npj Digital Medicine · 97 papers in training set · Top 1% · 3.6%
7. Patterns · 70 papers in training set · Top 0.3% · 3.5%
8. Communications Medicine · 85 papers in training set · Top 0.1% · 2.6%
9. eBioMedicine · 130 papers in training set · Top 0.8% · 2.0%
10. JCO Clinical Cancer Informatics · 18 papers in training set · Top 0.5% · 1.7%
11. Proceedings of the National Academy of Sciences · 2130 papers in training set · Top 37% · 1.3%
12. Nature Computational Science · 50 papers in training set · Top 1% · 0.9%
13. PLOS ONE · 4510 papers in training set · Top 65% · 0.9%
14. PLOS Digital Health · 91 papers in training set · Top 2% · 0.9%
15. Nature Methods · 336 papers in training set · Top 6% · 0.8%
16. Nature · 575 papers in training set · Top 15% · 0.8%
17. Nature Biomedical Engineering · 42 papers in training set · Top 2% · 0.7%
18. EBioMedicine · 39 papers in training set · Top 1% · 0.7%
19. Journal of Medical Imaging · 11 papers in training set · Top 0.4% · 0.7%
20. JAMIA Open · 37 papers in training set · Top 2% · 0.7%
21. Frontiers in Bioinformatics · 45 papers in training set · Top 1% · 0.7%
22. Journal of Pathology Informatics · 13 papers in training set · Top 0.4% · 0.7%