Exploration of ChatGPT application in diabetes education: a multi-dataset, multi-reviewer study
Ying, Z.; Fan, Y.; Lu, J.; Wang, P.; Zou, L.; Tang, Q.; Chen, Y.; Li, X.; Chen, Y.
Aims: Large language models (LLMs), exemplified by ChatGPT, have recently emerged as potential solutions to the challenges of traditional diabetes education. This study aimed to explore the feasibility and utility of applying ChatGPT to diabetes education.

Methods: We conducted a multi-dataset, multi-reviewer study. In the retrospective dataset evaluation, 85 questions covering seven aspects of diabetes education were collected. Three physicians evaluated the ChatGPT responses for reproducibility, relevance, correctness, helpfulness, and safety, while twelve laypersons evaluated the readability, helpfulness, and trustworthiness of the responses. In the real-world dataset evaluation, three individuals with type 2 diabetes (a newly diagnosed patient, a patient with diabetes for 20 years on oral anti-diabetic medications, and a patient with diabetes for 40 years on insulin therapy) posed their own questions. The helpfulness and trustworthiness of responses from ChatGPT and physicians were assessed.

Results: In the retrospective dataset evaluation, physicians rated ChatGPT responses for relevance (5.98/6.00), correctness (5.69/6.00), helpfulness (5.75/6.00), and safety (5.95/6.00), while layperson ratings for readability, helpfulness, and trustworthiness were 5.21/6.00, 5.02/6.00, and 4.99/6.00, respectively. In the real-world dataset evaluation, ChatGPT responses received lower ratings than physicians' responses (helpfulness: 4.18 vs. 4.91, P < 0.001; trustworthiness: 4.80 vs. 5.20, P = 0.042). However, when carefully crafted prompts were used, the ratings of ChatGPT responses were comparable to those of physicians.

Conclusions: The results show that applying ChatGPT to typical diabetes education questions is feasible, and that carefully crafted prompts are crucial for satisfactory ChatGPT performance in real-world personalized diabetes education.
What's new?
- This is the first study covering evaluations by doctors, laypersons and patients to explore the application of ChatGPT in diabetes education. This multi-reviewer evaluation approach provided a multidimensional understanding of ChatGPT's capabilities and laid the foundation for subsequent clinical evaluations.
- This study suggested that applying ChatGPT to typical diabetes education questions is feasible, and that carefully crafted prompts are crucial for satisfactory ChatGPT performance in real-world personalized diabetes education.
- Results of the layperson evaluation revealed that human factors could result in disparities in evaluations. Further consideration of trust and ethical issues in AI development is necessary.