
Patient friendly summaries of oncology consultations generated by large language models - A pilot study of patient and provider satisfaction

Harchandani, S.; Quinn, R.; Mittal, K.; Martin, A.; Wang, M.-J.; Holstead, R. G.

2025-10-15 · oncology
medRxiv · DOI: 10.1101/2025.10.13.25337951

The expanding capacity of large language models allows for improvements in patient and provider healthcare quality and experience. The medical oncology consultation often includes discussion of a life-limiting diagnosis and complex treatment protocols. Patient recall from the discussion may be limited, and a patient-specific written summary could help with understanding, recall, and overall experience. Using a privacy-compliant large language model, the model was prompted to rewrite an ambulatory medical oncology consultation note as a patient-friendly summary, capturing key details of the diagnosis and treatment plan. The summary was provided to both provider and patient for review, and a 5-point Likert survey was administered asking about the output's accuracy, clarity, and helpfulness. Patients reported agreement of 100%, 100%, and 87% on each topic, respectively, and 93% of patients recommended the use of similar summaries in the future. Providers reported agreement of 98%, 91%, and 96% for accuracy, clarity, and empathy, respectively, and all providers (100%) recommended that similar summaries be used in the future. Some of the summaries retained jargon, and results from this study will be used to optimize the prompt for an expanded study. In conclusion, a patient-friendly summary derived from a medical note using a large language model prompt was helpful to patients and found to be useful by providers.

Author Summary

As medical oncology providers, our new-patient consultation appointments often require disclosing a cancer diagnosis and discussing prognosis, complex treatment plans, the potential for significant side effects, and the tests and procedures required before the care plan can begin. Patients often benefit from friends or family who take notes during an appointment; however, this is not always possible. Technological advances in natural language processing with large language models such as ChatGPT allow medical language to be translated into plain language. In this study, we used a prompt to rewrite a medical note into a summary of the patient's oncologic diagnosis and care plan. We then provided this summary to patients and providers to assess their feedback on the value of these summaries. We found that both providers and patients found these summaries to be accurate and understandable, and both groups recommended further development of these summaries. We intend to optimize our summary production for future studies using findings and feedback from this project.

Matching journals

The top 7 journals account for 50% of the predicted probability mass.

Rank  Journal                                                  Papers in training set  Percentile  Probability
 1    JMIR Formative Research                                    32                    Top 0.1%    18.4%
 2    Artificial Intelligence in Medicine                        15                    Top 0.1%    12.2%
 3    Biology Methods and Protocols                              53                    Top 0.1%     6.3%
 4    PLOS ONE                                                 4510                    Top 29%      6.2%
 5    Scientific Reports                                       3102                    Top 32%      3.9%
 6    Computers in Biology and Medicine                         120                    Top 1%       2.8%
 7    iScience                                                 1063                    Top 9%       2.3%
----- 50% of probability mass above this line -----
 8    BMC Research Notes                                         29                    Top 0.1%     2.1%
 9    JCO Clinical Cancer Informatics                            18                    Top 0.4%     2.1%
10    Journal of Medical Internet Research                       85                    Top 2%       2.1%
11    BMC Bioinformatics                                        383                    Top 4%       2.1%
12    Frontiers in Oncology                                      95                    Top 2%       1.9%
13    Cancer Medicine                                            24                    Top 0.8%     1.7%
14    BMC Cancer                                                 52                    Top 1%       1.6%
15    BMC Medical Informatics and Decision Making                39                    Top 2%       1.6%
16    Bioengineering                                             24                    Top 0.6%     1.5%
17    Healthcare                                                 16                    Top 1.0%     1.3%
18    PLOS Computational Biology                               1633                    Top 19%      1.3%
19    PeerJ                                                     261                    Top 9%       1.3%
20    Database                                                   51                    Top 0.5%     1.3%
21    Journal of Biomedical Informatics                          45                    Top 1.0%     1.3%
22    Journal of Clinical Medicine                               91                    Top 5%       1.2%
23    BMC Medical Education                                      20                    Top 0.6%     1.2%
24    JMIR Medical Informatics                                   17                    Top 1%       1.2%
25    Journal of the American Medical Informatics Association    61                    Top 2%       1.2%
26    Computer Methods and Programs in Biomedicine               27                    Top 0.6%     1.1%
27    IEEE Journal of Biomedical and Health Informatics          34                    Top 2%       1.1%
28    PLOS Digital Health                                        91                    Top 2%       1.1%
29    Frontiers in Bioinformatics                                45                    Top 0.6%     0.9%
30    Journal of Translational Medicine                          46                    Top 2%       0.9%