Token Alignment for Verifying LLM-Extracted Text

Booeshaghi, A. S.; Streets, A. M.

2026-02-10 bioinformatics
10.64898/2026.02.06.704502 bioRxiv
Large language models excel at text extraction, but they sometimes hallucinate. A simple way to avoid hallucinations is to remove any extracted text that does not appear in the original source. This is easy when the extracted text is contiguous (findable with exact string matching), but much harder when it is discontiguous. Techniques for finding discontiguous phrases depend heavily on how the text is split, i.e., how it is tokenized. In this study, we show that splitting text along subword boundaries, with LLM-specific tokenizers, and aligning extracted text with ordered alignment algorithms, improves alignment by about 50% compared to word-level tokenization. To demonstrate this, we introduce the Berkeley Ordered Alignment of Text (BOAT) dataset, a modification of the Stanford Question Answering Dataset (SQuAD) that includes non-contiguous phrases, and BIO-BOAT, a biomedical variant built from 51 bioRxiv preprints. We show that text-alignment methods form a partially ordered set, and that ordered alignment is the most practical choice for verifying LLM-extracted text. We implement this approach in taln, which enumerates ordinal subword alignments.

Matching journals

The top 5 journals account for 50% of the predicted probability mass.

Rank  Journal                                                   Papers in training set  Percentile  Probability
 1    Nature Methods                                             336                    Top 0.3%    18.5%
 2    Bioinformatics                                            1061                    Top 2%      14.6%
 3    Cell Systems                                               167                    Top 1%       8.4%
 4    Nature Communications                                     4913                    Top 22%      8.4%
 5    Genome Biology                                             555                    Top 2%       4.3%
----- 50% of probability mass above -----
 6    Nature Biotechnology                                       147                    Top 2%       4.3%
 7    Genome Research                                            409                    Top 0.9%     3.6%
 8    Proceedings of the National Academy of Sciences           2130                    Top 23%      3.1%
 9    Nature                                                     575                    Top 8%       2.6%
10    Nucleic Acids Research                                    1128                    Top 8%       2.4%
11    PLOS ONE                                                  4510                    Top 49%      2.1%
12    Bioinformatics Advances                                    184                    Top 2%       1.9%
13    Scientific Reports                                        3102                    Top 54%      1.9%
14    Science                                                    429                    Top 13%      1.9%
15    iScience                                                  1063                    Top 15%      1.7%
16    eLife                                                     5422                    Top 42%      1.7%
17    BMC Bioinformatics                                         383                    Top 5%       1.7%
18    Nature Computational Science                                50                    Top 0.7%     1.7%
19    Journal of the American Medical Informatics Association     61                    Top 2%       1.2%
20    Nature Genetics                                            240                    Top 6%       0.9%
21    PLOS Computational Biology                                1633                    Top 22%      0.9%
22    Cell                                                       370                    Top 17%      0.7%
23    npj Digital Medicine                                        97                    Top 4%       0.6%