Priorities for AI Education: Clinicians' Perspectives
Jeffrey, M.; Auyoung, E.; Pak, D.
Objective
Educating clinicians about Artificial Intelligence (AI) is an urgent need(1): the UK General Medical Council (GMC) places liability with practitioners(2), the EU AI Act requires employers to provide appropriate training(3), and AI, like any tool, requires training to use safely. The NHSE Capability Framework provides guidance(4), but frontline clinicians' perspectives are unknown, so we sought to identify their priorities.

Methods and Analysis
Using iterative interviews with residents, educators and experts, we synthesised 10 contextualised AI-related problem statements. We surveyed residents and consultant-educators in the East of England, who rated their confidence in, and the importance of, each statement. Participants also ranked their preferred learning modality.

Results
We received 299 responses. Clinicians' priorities, defined by high importance (I) and low confidence (C), were: understanding liability implications (I: 40%; C: 1.82/5), determining appropriate levels of confidence in AI algorithms (I: 36.5%; C: 1.98/5), and mitigating security and privacy risks (I: 34%; C: 1.68/5). Confidence was low (mean 20, range 10-50), with no significant difference between educators and residents. Residents preferred integration of training into regional teaching, while consultant-educators favoured webinars.

Conclusion
Our findings show that clinicians prioritise practical concerns, such as liability and determining confidence in algorithmic outputs. In contrast, critical appraisal and explaining AI to patients were deprioritised, despite their relevance to clinical safety. This study enhances the NHSE Capability Framework by contextualising AI-related capabilities for clinicians as users and identifying priorities with which to develop scalable training.
Key Messages
What is already known on this topic
While clinicians face legal accountability for their use of AI in healthcare(2,3,5), there remains no standardised educational pathway to support them in acquiring the necessary skills. Although expert-informed capability frameworks exist(6), they are necessarily broad and lack operational clarity for day-to-day clinical roles.

What this study adds
This study translates 31 AI-related capabilities from the NHSE DART-Ed Capability Framework(6) into 10 concise AI learning needs for clinicians of the user archetype, through iterative interviews with residents, educators and AI experts. A regional survey with 299 responses from residents and educators highlights practical concerns, such as liability and determining appropriate confidence in AI algorithms, as learners' priorities, whilst critical appraisal and explaining AI to patients were deprioritised despite their relevance to clinical safety.

How this study might affect research, practice or policy
The educational priorities of clinicians as users of AI identified in this study provide engaging, curriculum-ready content mapped to the user archetype of the DART-Ed framework, which can be adapted to role- and task-specific educational activities.