
Assessing the Impact of Pretraining Domain Relevance on Large Language Models Across Various Pathology Reporting Tasks

Lu, Y.; Srinivasan, G.; Preum, S.; Pettus, J.; Davis, M.; Greenburg, J.; Vaickus, L.; Levy, J.

2023-09-11 · pathology
medRxiv · DOI: 10.1101/2023.09.10.23295318

Deep learning (DL) algorithms continue to develop at a rapid pace, providing researchers access to a set of tools capable of solving a wide array of biomedical challenges. While this progress is promising, it also leads to confusion regarding task-specific model choices, where deeper investigation is necessary to determine the optimal model configuration. Natural language processing (NLP) has the unique ability to accurately and efficiently capture a patient's narrative, which can improve the operational efficiency of modern pathology laboratories through advanced computational solutions that facilitate rapid access to and reporting of histological and molecular findings. In this study, we use pathology reports from a large academic medical system to assess the generalizability and potential real-world applicability of various deep learning-based NLP models on reports with highly specialized vocabulary and complex reporting structures. The performance of each NLP model examined was compared across four distinct tasks: 1) current procedural terminology (CPT) code classification, 2) pathologist classification, 3) report sign-out time regression, and 4) report text generation, under the hypothesis that models initialized on domain-relevant medical text would perform better than models not attuned to this prior knowledge. Our study highlights that the performance of deep learning-based NLP models can vary meaningfully across pathology-related tasks. Models pretrained on medical data outperform other models where medical domain knowledge is crucial, e.g., CPT code classification. However, where interpretation is more subjective (i.e., teasing apart pathologist-specific lexicon and variable sign-out times), models with medical pretraining do not consistently outperform the other approaches. Instead, fine-tuning models pretrained on general or unrelated text sources achieved comparable or better results.
Overall, our findings underscore the importance of considering the nature of the task at hand when selecting a pretraining strategy for NLP models in pathology. The optimal approach may vary depending on the specific requirements and nuances of the task, and unrelated text sources can offer valuable insights and improve performance in certain cases, contradicting established notions about domain adaptation. This research contributes to our understanding of pretraining strategies for large language models and further informs the development and deployment of these models in pathology-related applications.

Matching journals

The top 4 journals account for 50% of the predicted probability mass.

Rank  Journal                                                  Papers in training set  Percentile  Probability
1     Biology Methods and Protocols                            53                      Top 0.1%    18.6%
2     Journal of Pathology Informatics                         13                      Top 0.1%    18.6%
3     Modern Pathology                                         21                      Top 0.1%    12.5%
4     Scientific Reports                                       3102                    Top 18%     6.4%
5     Computers in Biology and Medicine                        120                     Top 0.6%    4.3%
6     Journal of Medical Imaging                               11                      Top 0.1%    2.6%
7     PLOS ONE                                                 4510                    Top 48%     2.1%
8     GigaScience                                              172                     Top 0.9%    2.1%
9     Computational and Structural Biotechnology Journal       216                     Top 4%      1.7%
10    npj Digital Medicine                                     97                      Top 2%      1.7%
11    Medical Image Analysis                                   33                      Top 0.7%    1.5%
12    Journal of Biomedical Informatics                        45                      Top 0.9%    1.5%
13    iScience                                                 1063                    Top 19%     1.3%
14    BMC Medical Informatics and Decision Making              39                      Top 2%      1.3%
15    IEEE Journal of Biomedical and Health Informatics        34                      Top 1%      1.2%
16    PLOS Computational Biology                               1633                    Top 20%     1.2%
17    The Lancet Digital Health                                25                      Top 0.7%    1.0%
18    Proceedings of the National Academy of Sciences          2130                    Top 40%     1.0%
19    Journal of the American Medical Informatics Association  61                      Top 2%      0.9%
20    Database                                                 51                      Top 0.8%    0.8%
21    npj Precision Oncology                                   48                      Top 1%      0.8%
22    JAMIA Open                                               37                      Top 1%      0.7%
23    Heliyon                                                  146                     Top 7%      0.7%
24    Biological Imaging                                       15                      Top 0.3%    0.7%
25    Journal of Medical Internet Research                     85                      Top 5%      0.6%
26    Frontiers in Genetics                                    197                     Top 11%     0.6%