More Signal vs. More Noise - Comparing Full Text and Abstract as Inputs for Large Language Model-based Classification of Oncology Trial Eligibility Criteria
Weyrich, J.; Dennstaedt, F.; Foerster, R.; Schroeder, C.; Aebersold, D. M.; Zwahlen, D. R.; Windisch, P.
Purpose: Large language models (LLMs) offer significant potential for automating the classification of clinical trials by eligibility criteria. However, a critical question remains regarding the optimal input data: while abstracts provide a condensed, high-density signal, full-text articles contain a much higher volume of information. It remains unclear whether the additional signal found in full texts improves classification performance or whether the accompanying noise (in the form of thousands of words irrelevant to the question at hand in a complete manuscript) negatively affects the model's reasoning capabilities.

Methods: GPT-5 was applied to classify 200 randomized controlled oncology trials from high-impact medical journals, labeling each trial according to whether patients with localized and/or metastatic disease were eligible for inclusion. Each trial was classified twice - once using only the abstract and once using the full text - and GPT-5's outputs were compared with ground-truth labels established by manual annotation. Performance was assessed by calculating and comparing accuracy, precision, recall, and F1 score, and the McNemar test was used to assess the statistical significance of the differences between the two input formats.

Results: For identifying trials including patients with localized disease, GPT-5 achieved an accuracy of 86% (95% CI: 81% - 91%; F1 = 0.90) when using abstracts and 92% (95% CI: 88% - 95%; F1 = 0.92) when using full texts (p = 0.027). Performance for detecting trials that included patients with metastatic disease was comparably high, with accuracies of 99% (95% CI: 99% - 100%; F1 = 1.00) based on abstracts and 98% (95% CI: 97% - 100%; F1 = 0.99) based on full texts. Overall accuracy for assigning combined labels per trial increased from 86% (95% CI: 81% - 91%) using abstracts to 92% (95% CI: 88% - 95%) using full texts (p = 0.027).
Conclusion: Providing full-text articles to GPT-5 significantly improved the classification of trial eligibility criteria. These findings suggest that, for this task, the benefit of the additional signal contained within the full text outweighed the potential for performance degradation caused by increased noise. Utilizing full-text analysis appears particularly valuable for extracting specific eligibility criteria in oncology that are frequently omitted or not explicitly described within the abstract.
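The paired comparison described in the Methods can be illustrated with a short sketch of McNemar's exact test, which depends only on the two discordant counts (trials where exactly one input format yields a correct classification). The counts used in the usage comment below are hypothetical, not the study's actual contingency table:

```python
from math import comb

def mcnemar_exact(b: int, c: int) -> float:
    """Exact two-sided McNemar test p-value from discordant counts.

    b: trials classified correctly with abstracts but not full texts
    c: trials classified correctly with full texts but not abstracts
    Under the null hypothesis the discordant trials split as
    Binomial(b + c, 0.5); the p-value doubles the smaller tail.
    """
    n = b + c
    if n == 0:
        return 1.0  # no discordant pairs: no evidence of a difference
    k = min(b, c)
    tail = sum(comb(n, i) for i in range(k + 1)) / 2**n
    return min(2 * tail, 1.0)

# Hypothetical discordant counts: 2 trials favored abstracts,
# 14 favored full texts out of 200 paired classifications.
p = mcnemar_exact(2, 14)
```

Because the test conditions on the discordant pairs, the 184 (hypothetical) trials classified identically under both formats do not enter the calculation at all, which is what makes it appropriate for paired comparisons of two classifiers on the same items.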