The NLP-to-Expert Gap in Chest X-ray AI

2026-03-02 | radiology and imaging | Title + abstract only
View on medRxiv
In previous work, we achieved state-of-the-art performance on ChestX-ray14 (ROC-AUC 0.940, F1 0.821) using pretraining diversity and clinical metric optimization. Applying the same methodology to CheXpert, we obtained similar results when evaluating against the NLP-derived validation and test labels, but when evaluated against expert radiologist labels, performance dropped to 0.75-0.87 ROC-AUC. The models had learned to match the automated NLP labeling system, not to diagnose disease. This paper documents our investigation ...
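The gap described above comes down to scoring the same model predictions against two different label sources. Below is a minimal sketch of that comparison, not the paper's code: the arrays and the synthetic data generation are hypothetical placeholders, used only to illustrate how agreement with NLP-derived labels can be high while agreement with expert labels is much lower.

```python
# Illustrative sketch only: synthetic data standing in for one CheXpert finding.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 1000

# Hypothetical model probabilities for a single finding (e.g. pleural effusion).
pred_probs = rng.uniform(size=n)

# NLP labels made to correlate with the predictions (the model learned the labeler),
# versus an independent placeholder standing in for expert radiologist labels.
nlp_labels = (pred_probs + rng.normal(0.0, 0.15, n) > 0.5).astype(int)
expert_labels = (rng.uniform(size=n) > 0.5).astype(int)

# Same predictions, two reference standards.
print(f"ROC-AUC vs NLP labels:    {roc_auc_score(nlp_labels, pred_probs):.3f}")
print(f"ROC-AUC vs expert labels: {roc_auc_score(expert_labels, pred_probs):.3f}")
```

With the correlated NLP labels the AUC is high, while against the independent label set it falls toward chance, mirroring the kind of discrepancy the abstract reports between NLP-based and expert-based evaluation.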