
Show Your Work: Verbatim Evidence Requirements and Automated Assessment for Large Language Models in Biomedical Text Processing

Windisch, P.; Weyrich, J.; Dennstaedt, F.; Zwahlen, D. R.; Foerster, R.; Schroeder, C.

2026-03-04 | health informatics
DOI: 10.64898/2026.03.03.26346690 | medRxiv

Purpose: Large language models (LLMs) are used for biomedical text processing, but individual decisions are often hard to audit. We evaluated whether enforcing a mechanically checkable "show your work" quote affects accuracy, stability, and verifiability for trial eligibility-scope classification from abstracts.

Methods: We used 200 oncology randomized controlled trials (2005–2023) and provided models with only the title and abstract. Each trial was labeled according to whether it allowed inclusion of patients with localized and/or metastatic disease. Three flagship models (GPT-5.2, Gemini 3 Flash, Claude Opus 4.5) were queried with default settings under two independent conditions: label-only, and label plus a verbatim supporting quote. Models could abstain if they deemed that the abstract did not contain sufficient information. Each condition was repeated three times per abstract. Quotes were mechanically validated as exact substrings after whitespace normalization, and a separate judge step used an LLM to rate whether each quote supported the assigned label.

Results: The evidence requirement modestly reduced coverage (GPT-5.2: 86.2% to 84.3%; Gemini: 98.3% to 92.8%; Claude: 96.0% to 94.5%) by increasing abstentions and, for Gemini, invalid outputs. Conditional macro-F1 remained high but varied by model (slight gains for GPT-5.2 and Gemini, a decrease for Claude). Labels were stable across repetitions (Fleiss kappa 0.829 to 0.969). Mechanically valid quotes occurred in 83.3% to 91.2% of runs, yet only 48.0% to 78.8% of evidence-bearing predictions were judged semantically supported. Restricting to supported predictions increased macro-F1 at the cost of lower coverage.

Conclusion: Substring-verifiable quotes provide an automated audit trail and enable selective, higher-trust automation when applying LLMs to biomedical text processing. However, this approach introduces new failure modes and trades coverage for verifiability in a model-dependent way.
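The abstract does not include the validation code, but the mechanical check it describes is simple to reproduce. Below is a minimal Python sketch, assuming "whitespace normalization" means collapsing all whitespace runs to single spaces; the function names, the example text, and the `judge_supported` field in the selective-automation demo are illustrative, not taken from the paper.

```python
import re

def normalize_ws(text: str) -> str:
    """Collapse runs of whitespace to single spaces and trim the ends."""
    return re.sub(r"\s+", " ", text).strip()

def quote_is_valid(quote: str, abstract: str) -> bool:
    """Mechanical validity check: the quote must be an exact substring
    of the abstract after whitespace normalization."""
    return normalize_ws(quote) in normalize_ws(abstract)

# Hypothetical abstract text (not from the paper's dataset).
abstract_text = ("Eligible patients had histologically confirmed,\n"
                 "   metastatic or locally advanced disease.")
print(quote_is_valid("confirmed, metastatic or locally advanced", abstract_text))  # True
print(quote_is_valid("metastatic disease only", abstract_text))                    # False

# Selective-automation sketch: keep only predictions whose quote is both
# mechanically valid and judged supportive; everything else is deferred
# to a human. Coverage is the fraction of cases that remain automated.
runs = [
    {"label": "metastatic", "quote": "confirmed, metastatic or locally advanced",
     "judge_supported": True},
    {"label": "localized", "quote": "patients were enrolled",
     "judge_supported": False},
]
kept = [r for r in runs
        if quote_is_valid(r["quote"], abstract_text) and r["judge_supported"]]
print(f"coverage = {len(kept) / len(runs):.0%}")  # 50%
```

This two-stage filter mirrors the paper's finding that restricting to supported predictions raises conditional macro-F1 while lowering coverage.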

Matching journals

The top 2 journals account for over 50% of the predicted probability mass (33.9% + 19.1% = 53.0%).

Rank | Journal | Papers in training set | Percentile | Predicted probability
1 | JCO Clinical Cancer Informatics | 18 | Top 0.1% | 33.9%
2 | Journal of the American Medical Informatics Association | 61 | Top 0.1% | 19.1%
3 | Bioinformatics | 1061 | Top 6% | 3.3%
4 | The Lancet Digital Health | 25 | Top 0.2% | 3.0%
5 | BMC Medical Research Methodology | 43 | Top 0.4% | 2.7%
6 | npj Digital Medicine | 97 | Top 2% | 2.2%
7 | Journal of Clinical Epidemiology | 28 | Top 0.2% | 2.1%
8 | BMC Bioinformatics | 383 | Top 4% | 1.9%
9 | PLOS ONE | 4510 | Top 52% | 1.7%
10 | Artificial Intelligence in Medicine | 15 | Top 0.3% | 1.7%
11 | Journal of Biomedical Informatics | 45 | Top 0.8% | 1.7%
12 | BMC Medical Informatics and Decision Making | 39 | Top 2% | 1.5%
13 | Scientific Reports | 3102 | Top 61% | 1.5%
14 | BMJ Health & Care Informatics | 13 | Top 0.5% | 1.4%
15 | Journal of Medical Internet Research | 85 | Top 4% | 1.0%
16 | Nature Communications | 4913 | Top 58% | 1.0%
17 | Cancer Medicine | 24 | Top 1% | 0.9%
18 | Frontiers in Artificial Intelligence | 18 | Top 0.5% | 0.9%
19 | PLOS Computational Biology | 1633 | Top 22% | 0.9%
20 | International Journal of Medical Informatics | 25 | Top 1% | 0.9%
21 | JAMA Network Open | 127 | Top 4% | 0.8%
22 | Annals of Internal Medicine | 27 | Top 0.8% | 0.8%
23 | JMIR Medical Informatics | 17 | Top 1% | 0.8%
24 | Computer Methods and Programs in Biomedicine | 27 | Top 0.9% | 0.8%
25 | JAMIA Open | 37 | Top 1% | 0.8%
26 | Research Synthesis Methods | 20 | Top 0.2% | 0.8%
27 | Frontiers in Digital Health | 20 | Top 2% | 0.7%
28 | BMJ Open | 554 | Top 13% | 0.7%
29 | Trials | 25 | Top 2% | 0.5%