
Utilizing AI-Generated Plain Language Summaries to Enhance Interdisciplinary Understanding of Ophthalmology Notes: A Randomized Trial

Tailor, P. D.; D'Souza, H. S.; Castillejo Becerra, C.; Dahl, H. M.; Patel, N. R.; Kaplan, T. M.; Kohli, D.; Bothun, E. D.; Mohney, B. G.; Tooley, A. A.; Baratz, K. H.; Iezzi, R.; Barkmeier, A. J.; Bakri, S. J.; Roddy, G. W.; Hodge, D.; Sit, A. J.; Starr, M. R.; Chen, J. J.

2024-09-13 · Ophthalmology
medRxiv · DOI: 10.1101/2024.09.12.24313551

Background: Specialized terminology employed by ophthalmologists creates a comprehension barrier for non-ophthalmology providers, compromising interdisciplinary communication and patient care. Current solutions such as manual note simplification are impractical or inadequate. Large language models (LLMs) offer a potential low-burden approach to translating ophthalmology documentation into accessible language.

Methods: This prospective, randomized trial evaluated the addition of LLM-generated plain language summaries (PLSs) to standard ophthalmology notes (SONs). Participants included non-ophthalmology providers and ophthalmologists. The study assessed: (1) non-ophthalmology providers' comprehension of and satisfaction with either the SON (control) or SON+PLS (intervention), (2) ophthalmologists' evaluation of PLS accuracy, safety, and time burden, and (3) objective semantic and linguistic quality of the PLSs.

Results: 85% of non-ophthalmology providers (n=362, 33% response rate) preferred the PLS to the SON. Non-ophthalmology providers reported enhanced diagnostic understanding (p=0.012), increased satisfaction with note detail (p<0.001), and improved explanation clarity (p<0.001) for notes containing a PLS. The addition of a PLS narrowed the comprehension gap between providers who were comfortable and uncomfortable with ophthalmology terminology at baseline (intergroup difference p<0.001 to p>0.05). Semantic analysis of the PLSs demonstrated high meaning preservation (mean BERTScore F1: 0.85) with greater readability (Flesch Reading Ease: 51.8 vs. 43.6; Flesch-Kincaid Grade Level: 10.7 vs. 11.9). Ophthalmologists (n=489, 84% response rate) rated PLS accuracy highly (90% "a great deal") with minimal review time burden (94.9% required ≤1 minute). The PLS error rate was 26% on initial ophthalmologist review and editing, and 15% on independent ophthalmologist over-read of the edited PLSs. 84.9% of identified errors were deemed low risk for patient harm, and none carried a risk of severe harm or death.

Conclusions: LLM-generated plain language summaries enhance the accessibility and utility of ophthalmology notes for non-ophthalmology providers while maintaining high semantic fidelity and improving readability. PLS error rates underscore the need for careful implementation and ongoing safety monitoring in clinical practice.
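The readability gap reported above is measured with the Flesch formulas, which depend only on average sentence length and average syllables per word. A minimal Python sketch follows; the syllable counter is a crude vowel-group heuristic (production tools such as textstat use dictionary-backed counting, so exact scores will differ slightly):

```python
import re

def count_syllables(word: str) -> int:
    """Approximate syllables as runs of vowels, dropping a trailing silent 'e'."""
    word = word.lower()
    n = len(re.findall(r"[aeiouy]+", word))
    if word.endswith("e") and n > 1:
        n -= 1
    return max(n, 1)

def flesch_scores(text: str) -> tuple[float, float]:
    """Return (Flesch Reading Ease, Flesch-Kincaid Grade Level) for `text`."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    wps = len(words) / len(sentences)   # average words per sentence
    spw = syllables / len(words)        # average syllables per word
    reading_ease = 206.835 - 1.015 * wps - 84.6 * spw
    grade_level = 0.39 * wps + 11.8 * spw - 15.59
    return reading_ease, grade_level
```

Higher Reading Ease and lower Grade Level both indicate easier text, which is the direction the PLSs move in (51.8 vs. 43.6 and 10.7 vs. 11.9, respectively).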

Matching journals

The top 4 journals together account for just over 50% of the predicted probability mass.
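The "50% of probability mass" cutoff is simply the smallest prefix of the ranked predictions whose probabilities sum past the threshold. A minimal sketch (the helper name `journals_for_mass` is illustrative; probabilities are the top entries from the list below):

```python
# Top 10 predicted journal probabilities (percent), in ranked order.
probs = [19.1, 15.1, 10.7, 10.4, 5.0, 5.0, 3.7, 2.8, 2.8, 2.1]

def journals_for_mass(probs: list[float], target: float = 50.0) -> int:
    """Smallest k such that the top-k probabilities sum to >= target percent."""
    total = 0.0
    for k, p in enumerate(probs, start=1):
        total += p
        if total >= target:
            return k
    return len(probs)  # threshold never reached within the listed entries
```

With the listed values, the cumulative sum first crosses 50% at rank 4 (19.1 + 15.1 + 10.7 + 10.4 = 55.3%).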

1. British Journal of Ophthalmology: 19.1% (14 papers in training set, top 0.1%)
2. PLOS ONE: 15.1% (4510 papers in training set, top 11%)
3. Eye: 10.7% (11 papers in training set, top 0.1%)
4. Journal of Medical Internet Research: 10.4% (85 papers in training set, top 0.3%)
(50% of probability mass above this point)
5. Ophthalmology Science: 5.0% (20 papers in training set, top 0.1%)
6. PLOS Digital Health: 5.0% (91 papers in training set, top 0.4%)
7. F1000Research: 3.7% (79 papers in training set, top 0.5%)
8. Annals of Translational Medicine: 2.8% (17 papers in training set, top 0.4%)
9. Journal of General Internal Medicine: 2.8% (20 papers in training set, top 0.3%)
10. Genetics in Medicine: 2.1% (69 papers in training set, top 0.5%)
11. npj Digital Medicine: 1.9% (97 papers in training set, top 2%)
12. Scientific Reports: 1.9% (3102 papers in training set, top 52%)
13. Translational Vision Science & Technology: 1.8% (35 papers in training set, top 0.4%)
14. Cancer Medicine: 0.9% (24 papers in training set, top 1%)
15. BMJ Open: 0.8% (554 papers in training set, top 12%)
16. Cureus: 0.8% (67 papers in training set, top 4%)
17. Orphanet Journal of Rare Diseases: 0.8% (18 papers in training set, top 0.6%)
18. International Journal of Environmental Research and Public Health: 0.7% (124 papers in training set, top 7%)
19. British Journal of Cancer: 0.7% (42 papers in training set, top 2%)
20. Vaccines: 0.7% (196 papers in training set, top 3%)
21. Computers in Biology and Medicine: 0.7% (120 papers in training set, top 5%)
22. JAMA Network Open: 0.7% (127 papers in training set, top 5%)
23. American Journal of Medical Genetics Part A: 0.5% (17 papers in training set, top 0.4%)
24. Journal of the American Medical Informatics Association: 0.5% (61 papers in training set, top 2%)