OpenEvidence errs on the safe side in a structured test of triage recommendations
Jia, E.; Omar, M.; Barash, Y.; Brook, O. R.; Ahmed, M.; Kruskal, J. B.; Gorenshtein, A.; Klang, E.
Ramaswamy et al. recently reported in Nature Medicine that ChatGPT Health, a consumer-facing health AI tool, undertriaged 51.6% of true emergencies and was susceptible to social anchoring in a structured stress test of triage recommendations. We applied the same vignette-based benchmark to OpenEvidence, a widely used physician-facing AI platform for clinical decision support. The benchmark comprised 960 prompts across 21 clinical domains (Supplementary Table S3). OpenEvidence undertriaged 12.5% of emergencies, an approximately four-fold reduction relative to ChatGPT Health, and showed no anchoring effect. Its errors skewed in a safer direction, including a 68.0% overtriage rate for Home presentations. In 65 of 960 responses (6.8%), it declined to assign a triage level; these refusals occurred only in symptom-only prompts and never in urgent or emergency cases. Performance improved when objective clinical data were provided. Under the same benchmark, a widely used physician-facing system showed a different safety profile from a consumer-facing one, suggesting that who a health AI is built for can shape how it fails.
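The headline metrics here are simple proportions over labeled vignettes: undertriage among gold-standard emergencies, overtriage among gold-standard Home cases, and refusals over all responses. Below is a minimal sketch of how such rates might be computed, assuming a four-level ordinal triage scale; the scale, field names, and sample records are illustrative assumptions, not the study's actual benchmark code.

```python
# Sketch of the abstract's three rates: share of Emergency vignettes
# undertriaged, share of Home vignettes overtriaged, and overall refusal
# rate. The triage scale and data below are hypothetical.

TRIAGE_LEVELS = ["Home", "Primary care", "Urgent care", "Emergency"]  # assumed scale
RANK = {level: i for i, level in enumerate(TRIAGE_LEVELS)}

# Each record pairs a gold-standard triage level with the model's answer;
# None marks a refusal to assign any level.
responses = [
    {"gold": "Emergency", "model": "Urgent care"},  # undertriaged emergency
    {"gold": "Emergency", "model": "Emergency"},    # correct
    {"gold": "Home", "model": "Primary care"},      # overtriaged Home case
    {"gold": "Home", "model": None},                # refusal (symptom-only prompt)
]

def undertriage_rate(records, gold="Emergency"):
    """Fraction of gold-level cases assigned a lower level than gold."""
    cases = [r for r in records if r["gold"] == gold and r["model"] is not None]
    return sum(RANK[r["model"]] < RANK[gold] for r in cases) / len(cases)

def overtriage_rate(records, gold="Home"):
    """Fraction of gold-level cases assigned a higher level than gold."""
    cases = [r for r in records if r["gold"] == gold and r["model"] is not None]
    return sum(RANK[r["model"]] > RANK[gold] for r in cases) / len(cases)

def refusal_rate(records):
    """Fraction of all responses where no triage level was assigned."""
    return sum(r["model"] is None for r in records) / len(records)

print(undertriage_rate(responses))  # 0.5
print(overtriage_rate(responses))   # 1.0
print(refusal_rate(responses))      # 0.25
```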