Prompt-engineering improves clinical safety of large language models for opioid equipotency conversion

Marton, T.; Corpman, D.; Lai, L.; Gabriel, R. A.; Chen, Y.

2026-05-08 · pain medicine
medRxiv · DOI: 10.64898/2026.05.06.26352590
Background: Large language models (LLMs) are increasingly used in medical education and clinical decision-making, but their reliability in high-risk medication dosing remains unclear. Opioid rotation is a common task requiring precise calculations, where errors may result in overdose or inadequate pain relief.

Methods: Thirteen LLMs were tested using an API-based framework to ensure independent queries across trials. First, fictional clinical scenarios were constructed to simulate real-world situations involving opioid rotation; to test the effect of wording changes, each scenario was revised into four vignettes describing the same clinical situation. Next, opioid pairs were tested with a random-dose paradigm across a clinically pertinent range (5-120 mg daily morphine equivalents). LLM outputs were compared with expected values derived from reference standards. Accuracy was assessed using predefined safety thresholds: tight accuracy (0.85-1.15x the expected dose) and broad accuracy (0.6-1.7x). Models were tested naively and with prompts augmented with reference tables and unit explanations.

Results: Naive models generally exhibited low tight-range accuracy across opioid pairs. For any given opioid pair, each model consistently produced similar incorrect conversion ratios, despite wide variability across opioid pairs and across models. Vignette wording changes accounted for 76% of within-scenario response variance. Reference-based prompt augmentation significantly improved performance, with over half of the models achieving high proportions of conversions within tight accuracy for morphine equivalent conversions.

Conclusions: While commercial LLMs demonstrated variable accuracy in the naive state, prompt augmentation significantly improved their performance.
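The accuracy thresholds described in the abstract can be sketched as a simple ratio check. This is an illustrative reconstruction, not the authors' code: the `classify_accuracy` function and the equianalgesic factor of 1.5 for oral morphine to oxycodone are assumptions for demonstration only, not clinical guidance.

```python
# Sketch of the abstract's accuracy classification, assuming
# ratio = model-suggested dose / reference-derived expected dose.
# Thresholds from the abstract: tight = 0.85-1.15x, broad = 0.6-1.7x.

def classify_accuracy(model_dose_mg: float, expected_dose_mg: float) -> str:
    """Classify an LLM-suggested dose against the expected dose."""
    ratio = model_dose_mg / expected_dose_mg
    if 0.85 <= ratio <= 1.15:
        return "tight"    # within the stricter predefined safety range
    if 0.6 <= ratio <= 1.7:
        return "broad"
    return "outside"      # potentially unsafe conversion

# Illustrative example only: 30 mg oral morphine converted to oxycodone
# with an ASSUMED equianalgesic factor of 1.5 (not clinical guidance).
expected_oxycodone = 30 / 1.5                      # 20 mg
print(classify_accuracy(18, expected_oxycodone))   # ratio 0.9 -> "tight"
print(classify_accuracy(40, expected_oxycodone))   # ratio 2.0 -> "outside"
```

A model answer of 18 mg lands within the tight band (ratio 0.9), while 40 mg (ratio 2.0) falls outside even the broad band, the kind of error the study flags as clinically risky.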

Matching journals

The top 4 journals account for 50% of the predicted probability mass.

Rank  Journal                                                   Papers in training set  Percentile  Probability
 1    Clinical Pharmacology & Therapeutics                      25                      Top 0.1%    28.9%
 2    Frontiers in Digital Health                               20                      Top 0.1%    10.9%
 3    Clinical and Translational Science                        21                      Top 0.1%     6.7%
 4    npj Digital Medicine                                      97                      Top 0.9%     5.1%
      -- 50% of probability mass above this line --
 5    Journal of Medical Internet Research                      85                      Top 1%       4.5%
 6    Scientific Reports                                        3102                    Top 33%      3.7%
 7    JMIR Formative Research                                   32                      Top 0.4%     3.2%
 8    Journal of the American Medical Informatics Association   61                      Top 0.9%     3.0%
 9    PLOS ONE                                                  4510                    Top 43%      2.9%
10    BMJ Open Quality                                          15                      Top 0.3%     2.7%
11    PLOS Digital Health                                       91                      Top 1.0%     2.6%
12    International Journal of Medical Informatics              25                      Top 0.7%     2.0%
13    BMC Medical Informatics and Decision Making               39                      Top 1%       1.8%
14    BMJ Open                                                  554                     Top 9%       1.5%
15    JAMA Network Open                                         127                     Top 3%       1.4%
16    JAMIA Open                                                37                      Top 1%       1.3%
17    British Journal of Anaesthesia                            14                      Top 0.5%     1.3%
18    JMIRx Med                                                 31                      Top 1%       1.0%
19    Healthcare                                                16                      Top 1%       0.9%
20    JCO Clinical Cancer Informatics                           18                      Top 0.7%     0.9%
21    Journal of Biomedical Informatics                         45                      Top 1%       0.9%
22    Journal of Neuroscience Methods                           106                     Top 1%       0.9%
23    Frontiers in Pharmacology                                 100                     Top 4%       0.8%
24    Journal of Personalized Medicine                          28                      Top 1.0%     0.8%
25    Pharmacoepidemiology and Drug Safety                      13                      Top 0.4%     0.8%
26    Heliyon                                                   146                     Top 5%       0.8%
27    BMC Neurology                                             12                      Top 1%       0.5%