Medical Students' Use of Large Language Models: A National Survey

Barr, A. A.; Rozman, R. C.; Liu, K.; Pham, M.; Klarenbach, Z.; Chinna-Meyyappan, A.; Hassan, A. Y.; Zarychta, M.; El Ferri, O.; Al-Khaz'Aly, A.; Datt, P.; Herik, A. I.; Sadek, K.; Paget, M.; Holodinsky, J. K.

2026-01-29 medical education
medRxiv. DOI: 10.64898/2026.01.26.26344898

Background: Large language models (LLMs) are increasingly embedded in medical education and clinical care settings, yet limited empirical data describe Canadian medical students' use and perceptions of these tools. We aimed to characterize student engagement, including the LLMs used, frequency, purposes, trust, accuracy, perceived impacts, and attitudes toward educational and clinical integration.

Methods: We conducted a national survey of medical students in Canada, distributed between November and December 2025. We summarized responses using descriptive statistics and compared results between preclerkship and clerkship students using Fisher's exact test.

Results: Among 286 respondents from 10 medical schools, 96.50% reported using at least one LLM. The most commonly used LLMs were ChatGPT (93.36%) and OpenEvidence (57.69%). Daily/weekly use was most frequent for coursework assistance (60.22%) and clinical questions (57.14%). Most respondents reported positive impacts on efficiency (81.62%), learning (77.01%), and academic performance (59.49%). Students commonly reported encountering inaccurate information (90.18%). Formal instruction on LLM use was uncommon (10.95%), though 67.67% of students agreed medical schools should integrate formal instruction on LLMs. Only 21.43% of respondents felt adequately educated on the data privacy regulations applicable to these tools.

Conclusions: LLM use among surveyed medical students in Canada was nearly universal and perceived favourably. However, students reported exposure to inaccurate outputs and substantial gaps in formal training and privacy literacy. These findings support the development of structured curricular guidance on the appropriate application of these tools, including information verification practices and ethical, privacy-aware engagement.
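The abstract reports preclerkship-versus-clerkship comparisons via Fisher's exact test but does not give the underlying 2x2 counts, so the numbers below are only the classic tea-tasting illustration, not survey data. A minimal self-contained sketch of the two-sided test, built from hypergeometric probabilities with `math.comb`:

```python
from math import comb

def fisher_exact_two_sided(a, b, c, d):
    """Two-sided Fisher's exact p-value for the 2x2 table [[a, b], [c, d]].

    Sums the hypergeometric probability of every table with the same
    margins whose probability is no greater than the observed table's.
    """
    n = a + b + c + d
    row1, col1 = a + b, a + c

    def hyper_p(x):
        # P(X = x) for a hypergeometric draw with the observed margins
        return comb(col1, x) * comb(n - col1, row1 - x) / comb(n, row1)

    p_obs = hyper_p(a)
    lo = max(0, row1 - (n - col1))  # smallest feasible count in cell a
    hi = min(row1, col1)            # largest feasible count in cell a
    # small relative tolerance guards against floating-point ties
    return sum(hyper_p(x) for x in range(lo, hi + 1)
               if hyper_p(x) <= p_obs * (1 + 1e-9))

# Classic tea-tasting table (hypothetical, NOT from the survey):
print(fisher_exact_two_sided(3, 1, 1, 3))  # ~0.4857 (= 34/70)
```

In practice one would use `scipy.stats.fisher_exact`; the hand-rolled version above just makes the calculation behind the abstract's stated method explicit.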

Matching journals

The top 5 journals account for 50% of the predicted probability mass.

Rank | Journal | Papers in training set | Percentile | Probability
1 | PLOS ONE | 4510 | Top 13% | 14.4%
2 | BMC Medical Education | 20 | Top 0.1% | 12.4%
3 | International Journal of Medical Informatics | 25 | Top 0.1% | 10.2%
4 | Journal of Medical Internet Research | 85 | Top 0.4% | 10.2%
5 | BMJ Open | 554 | Top 4% | 4.9%
(50% of probability mass above)
6 | Scientific Reports | 3102 | Top 27% | 4.3%
7 | PLOS Digital Health | 91 | Top 0.6% | 4.2%
8 | BMC Public Health | 147 | Top 2% | 3.6%
9 | Frontiers in Public Health | 140 | Top 2% | 3.6%
10 | Journal of General Internal Medicine | 20 | Top 0.2% | 3.3%
11 | Journal of Clinical and Translational Science | 11 | Top 0.1% | 3.1%
12 | Open Forum Infectious Diseases | 134 | Top 0.8% | 2.5%
13 | Healthcare | 16 | Top 0.4% | 1.9%
14 | BMC Medical Informatics and Decision Making | 39 | Top 1% | 1.8%
15 | The Lancet Digital Health | 25 | Top 0.4% | 1.7%
16 | Cancer Medicine | 24 | Top 0.8% | 1.7%
17 | JAMA Network Open | 127 | Top 4% | 0.9%
18 | npj Digital Medicine | 97 | Top 3% | 0.8%
19 | F1000Research | 79 | Top 4% | 0.8%
20 | Frontiers in Medicine | 113 | Top 6% | 0.8%
21 | Pediatrics | 10 | Top 0.3% | 0.7%
22 | Cureus | 67 | Top 6% | 0.5%
23 | BMJ Health & Care Informatics | 13 | Top 1% | 0.5%