
Evaluating the AI Potential as a Safety Net for Diagnosis: A Novel Benchmark of Large Language Models in Correcting Diagnostic Errors

Hassoon, A.; Peng, X.; Irimia, R.; Lianjie, A.; Leo, H.; Bandeira, A.; Woo, H. Y.; Dredze, M.; Abdulnour, R.-E.; McDonald, K. M.; Peterson, S.; Newman-Toker, D.

Posted 2026-02-24 | Subject area: health systems and quality improvement
DOI: 10.64898/2026.02.22.26346832 (medRxiv)

Background

Diagnostic errors are a leading cause of preventable patient harm, often occurring during early clinical encounters, where diagnostic uncertainty is greatest. Large language models (LLMs) have shown promise in medical reasoning, yet their ability to function as a diagnostic safety net, specifically by identifying and correcting human diagnostic errors, has not been systematically quantified. We evaluated whether state-of-the-art LLMs can effectively challenge, rather than merely confirm, an erroneous physician diagnosis.

Methods

We evaluated 16 leading LLMs (including GPT-o1, Gemini 2.5 Pro, and Claude 3.7 Sonnet) on 200 standardized clinical vignettes representing 20 high-stakes, frequently misdiagnosed conditions. Models were presented with the full clinical record together with an incorrect physician diagnosis. Primary outcomes were the diagnostic correction rate (disagreeing with the error and providing the correct diagnosis) and the ratio of correction to error detection. We further tested model robustness by generating 2,200 vignette variants to assess the influence of demographic (race/ethnicity) and contextual (institutional reputation, training level, insurance) tokens.

Results

Diagnostic correction rates varied significantly across models. Gemini 2.5 Pro demonstrated the highest performance, correcting the physician's error in 55.0% of cases (n=110/200), followed by Claude Sonnet 3.5 (48.5%) and Sonnet 4 (47.0%). In contrast, DeepSeek V3 corrected only 20.0% of cases. Performance was strikingly consistent at the disease level: most models failed to correct errors in syphilis, spinal epidural abscess, and myocardial infarction. Furthermore, several models exhibited confirmation bias, agreeing with the incorrect diagnosis in 11.0% to 50.0% of cases. Stability across demographic and contextual variants was inconsistent, with some models showing spurious performance shifts driven by non-clinical tokens.

Conclusion

While top-performing LLMs can intercept roughly half of human diagnostic errors in high-stakes scenarios, performance is heterogeneous and highly sensitive to non-clinical context. Current models exhibit significant disease-specific gaps and a tendency toward confirmation bias, suggesting that their safe clinical integration will require adversarial, multi-agent workflows designed to prioritize skepticism over baseline agreement.
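The abstract's two primary outcomes are simple case-level aggregates: the correction rate (the model rejects the erroneous diagnosis AND supplies the correct one) and the ratio of correction to error detection (how often a disagreement actually lands on the right answer). A minimal sketch of how those metrics could be computed is shown below; the `CaseResult` structure, field names, and toy data are illustrative assumptions, not the authors' actual evaluation pipeline.

```python
# Hypothetical per-case scoring record; field names are illustrative, not
# taken from the study's actual pipeline.
from dataclasses import dataclass

@dataclass
class CaseResult:
    agreed_with_error: bool  # model confirmed the incorrect physician diagnosis
    gave_correct_dx: bool    # model's final diagnosis matched the vignette's truth

def correction_rate(results):
    """Fraction of cases where the model rejected the error AND was correct."""
    corrected = sum(1 for r in results
                    if not r.agreed_with_error and r.gave_correct_dx)
    return corrected / len(results)

def detection_rate(results):
    """Fraction of cases where the model disagreed with the error at all."""
    detected = sum(1 for r in results if not r.agreed_with_error)
    return detected / len(results)

def correction_to_detection_ratio(results):
    """Of the errors the model flagged, the share it actually fixed."""
    d = detection_rate(results)
    return correction_rate(results) / d if d else 0.0

# Toy example: 4 cases, 3 detections, 2 of which are true corrections.
cases = [
    CaseResult(agreed_with_error=False, gave_correct_dx=True),
    CaseResult(agreed_with_error=False, gave_correct_dx=False),
    CaseResult(agreed_with_error=True,  gave_correct_dx=False),
    CaseResult(agreed_with_error=False, gave_correct_dx=True),
]
print(correction_rate(cases))                # 0.5  (2 of 4 corrected)
print(correction_to_detection_ratio(cases))  # 0.5 / 0.75 = 2/3
```

The gap between detection and correction is the interesting quantity: a model can be a good skeptic (high detection) while still being a poor safety net (low correction-to-detection ratio).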

Matching journals

The top 5 journals account for 50% of the predicted probability mass.

Rank  Journal                                                  Papers in training set  Percentile  Probability
1     npj Digital Medicine                                     97                      Top 0.3%    17.3%
2     Nature Medicine                                          117                     Top 0.1%    12.2%
3     Nature                                                   575                     Top 3%      8.3%
4     Medical Decision Making                                  10                      Top 0.1%    7.1%
5     PLOS ONE                                                 4510                    Top 28%     6.3%
6     Communications Medicine                                  85                      Top 0.1%    6.2%
7     Scientific Reports                                       3102                    Top 35%     3.6%
8     PLOS Digital Health                                      91                      Top 0.8%    3.2%
9     European Heart Journal - Digital Health                  15                      Top 0.2%    2.6%
10    BMC Medical Informatics and Decision Making              39                      Top 2%      1.7%
11    The Lancet Digital Health                                25                      Top 0.4%    1.7%
12    Frontiers in Digital Health                              20                      Top 0.7%    1.7%
13    Nature Human Behaviour                                   85                      Top 3%      1.5%
14    PLOS Biology                                             408                     Top 11%     1.5%
15    Journal of Biomedical Informatics                        45                      Top 1%      1.2%
16    Journal of the American Medical Informatics Association  61                      Top 2%      1.1%
17    BMJ Open                                                 554                     Top 11%     0.9%
18    Nature Machine Intelligence                              61                      Top 3%      0.9%
19    BMJ Health & Care Informatics                            13                      Top 0.8%    0.9%
20    JAMA Network Open                                        127                     Top 4%      0.7%
21    Computers in Biology and Medicine                        120                     Top 5%      0.7%
22    BMC Infectious Diseases                                  118                     Top 5%      0.7%
23    iScience                                                 1063                    Top 33%     0.7%
24    Frontiers in Public Health                               140                     Top 8%      0.7%
25    Journal of Personalized Medicine                         28                      Top 1%      0.7%
26    Healthcare                                               16                      Top 2%      0.7%
27    Canadian Medical Association Journal                     15                      Top 0.5%    0.6%