
Grounded large language models for diagnostic prediction in real-world emergency department settings

Niset, A.; Melot, I.; Pireau, M.; Englebert, A.; Scius, N.; Flament, J.; El Hadwe, S.; Al Barajraji, M.; Thonon, H.; Barrit, S.

2025-02-24 emergency medicine
10.1101/2025.02.23.25322736 medRxiv

Background: Emergency departments face increasing pressures from staff shortages, patient surges, and administrative burdens. While large language models (LLMs) show promise in clinical support, their deployment in emergency medicine presents technical and regulatory challenges. Previous studies often relied on simplistic evaluations using public datasets, overlooking real-world complexities and data privacy concerns.

Methods: At a tertiary emergency department, we retrieved 79 consecutive cases from a peak 24-hour period, constituting a siloed dataset. We evaluated six pipelines combining open- and closed-source embedding models (text-embedding-ada-002 and MXBAI) with foundational models (GPT-4, Llama3, and Qwen2), grounded through retrieval-augmented generation with emergency medicine textbooks. The models' top-five diagnostic predictions on early clinical data were compared against reference diagnoses established through expert consensus based on complete clinical data. Outcomes included diagnostic inclusion rate, ranking performance, and citation sourcing capabilities.

Results: All pipelines showed comparable diagnostic inclusion rates (62.03-72.15%) without significant differences in pairwise comparisons. Case characteristics, rather than model combinations, significantly influenced predictive diagnostic performance. Cases with specific diagnoses were correctly diagnosed significantly more often than unspecific ones (85.53% vs. 31.41%, p<0.001), as were surgical versus medical cases (79.49% vs. 56.25%, p<0.001). Open-source foundational models demonstrated superior sourcing capabilities compared to GPT-4-based combinations (OR: 33.92 to ∞, p<1.4e-12), with MXBAI/Qwen2 achieving perfect sourcing.

Conclusion: Open- and closed-source LLMs showed promising and comparable predictive diagnostic performance in a real-world emergency setting when evaluated on siloed data. Case characteristics emerged as the primary determinant of performance, suggesting that current limitations reflect fundamental AI-alignment challenges in medical reasoning rather than model-specific constraints. Open-source models demonstrated superior sourcing capabilities, a critical advantage for interpretability. Continued research exploring larger-scale, multi-centric efforts, including real-time applications and human-computer interactions, as well as real-world clinical benchmarking and sourcing verification, will be key to delineating the full potential of grounded LLM-driven diagnostic assistance in emergency medicine.
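The two-stage design described above (retrieval-augmented grounding, then a top-five diagnostic inclusion metric) can be sketched in miniature. This is not the authors' code: the toy bag-of-words embedding below merely stands in for text-embedding-ada-002 or MXBAI, the passages stand in for textbook excerpts, and all function names are illustrative.

```python
from collections import Counter
from math import sqrt

def embed(text):
    # Toy bag-of-words "embedding"; the study used dense models
    # (text-embedding-ada-002 / MXBAI) instead.
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse term-count vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, passages, k=2):
    # Retrieval step of RAG: return the k passages most similar
    # to the query, to be prepended to the LLM prompt as grounding.
    q = embed(query)
    ranked = sorted(passages, key=lambda p: cosine(q, embed(p)), reverse=True)
    return ranked[:k]

def diagnostic_inclusion_rate(top5_lists, references):
    # Primary outcome: fraction of cases whose reference diagnosis
    # appears anywhere in the model's top-five prediction list.
    hits = sum(ref in preds for preds, ref in zip(top5_lists, references))
    return hits / len(references)
```

For example, `retrieve("acute chest pain", textbook_passages, k=2)` would surface the most lexically similar excerpts, and an inclusion rate of 0.62-0.72 corresponds to the 62.03-72.15% range reported above.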

Matching journals

The top 6 journals account for 50% of the predicted probability mass.

Rank | Journal | Papers in training set | Percentile | Probability
1 | Artificial Intelligence in Medicine | 15 | Top 0.1% | 14.4%
2 | International Journal of Medical Informatics | 25 | Top 0.1% | 14.4%
3 | PLOS ONE | 4510 | Top 25% | 6.8%
4 | Scientific Reports | 3102 | Top 17% | 6.4%
5 | Journal of the American Medical Informatics Association | 61 | Top 0.4% | 6.4%
6 | PLOS Digital Health | 91 | Top 0.4% | 6.3%
(50% of probability mass above this line)
7 | npj Digital Medicine | 97 | Top 0.9% | 4.9%
8 | Journal of Medical Internet Research | 85 | Top 1% | 4.4%
9 | Frontiers in Public Health | 140 | Top 3% | 2.6%
10 | Heliyon | 146 | Top 1% | 2.1%
11 | Emergency Medicine Journal | 20 | Top 0.3% | 1.7%
12 | GigaScience | 172 | Top 1% | 1.7%
13 | iScience | 1063 | Top 14% | 1.7%
14 | BMC Medical Informatics and Decision Making | 39 | Top 2% | 1.3%
15 | Nature Human Behaviour | 85 | Top 3% | 1.2%
16 | Computational and Structural Biotechnology Journal | 216 | Top 7% | 1.0%
17 | Healthcare | 16 | Top 1% | 0.9%
18 | Computers in Biology and Medicine | 120 | Top 4% | 0.9%
19 | CMAJ Open | 12 | Top 0.2% | 0.9%
20 | JMIR Medical Informatics | 17 | Top 1% | 0.9%
21 | Journal of Biomedical Informatics | 45 | Top 1% | 0.8%
22 | Cureus | 67 | Top 5% | 0.8%
23 | Annals of Translational Medicine | 17 | Top 1% | 0.8%
24 | IEEE Journal of Biomedical and Health Informatics | 34 | Top 2% | 0.7%
25 | JAMIA Open | 37 | Top 1% | 0.7%
26 | The Lancet Digital Health | 25 | Top 1% | 0.7%
27 | BMC Medical Research Methodology | 43 | Top 1% | 0.7%
28 | Computer Methods and Programs in Biomedicine | 27 | Top 1% | 0.7%
29 | JAMA Network Open | 127 | Top 5% | 0.7%
30 | Frontiers in Medicine | 113 | Top 8% | 0.6%