Trust in, acceptance of, and responsibility for, artificial intelligence in healthcare: patient and healthcare practitioner considerations

Spencer, E.-J.; Ihaddouchen, I.; Buijsman, S.; Jung, J.; van der Vorst, J.; Grünhagen, D.; Verhoef, K.; Gommers, D.; van Genderen, M. E.; Hilling, D.

2026-03-17 medical ethics
10.64898/2026.03.16.26348461 medRxiv

Objectives: Using qualitative methods, this study aimed to provide a comparative overview of the similarities and differences in perspectives on AI in healthcare between two stakeholder groups: healthcare practitioners and patients. It also aimed to investigate whether these perspectives may influence the adoption of AI in healthcare.

Design: The study was conducted using semi-structured interviews. Qualitative data from the interviews were analyzed using both deductive and inductive coding, followed by a thematic analysis to identify the prevailing categories for further discussion.

Setting: The study was conducted within the Department of Surgical Oncology and Gastrointestinal Surgery of Erasmus Medical Center in Rotterdam, the Netherlands.

Participants: A total of 30 participants were recruited using purposive sampling based on predefined inclusion characteristics: 18 healthcare professionals (10 surgeons and 8 nurses) and 12 patients. Healthcare professionals were eligible if they were surgeons specializing in gastrointestinal surgery; patients were eligible if they had undergone gastrointestinal surgery within the 12 months before the interviews were conducted. Patients with major health complications were excluded.

Outcome measures: The study's central objective was to develop a set of thematic domains characterizing how both stakeholder groups view the integration of AI in healthcare, encompassing their attitudes towards trust, acceptance, and responsibility. Additionally, it aimed to compare the perspectives of healthcare professionals and patients in order to identify areas of convergence and divergence.

Results: The analysis yielded 3 main thematic categories with 10 subcategories. The main thematic categories were AI Knowledge, Ethics, and Operational and Clinical Implications. While clinicians largely focused on validation, monitoring, administrative labor, and clinical integration, patients emphasized the importance of human attention, of being heard, and of maintaining trust in their clinician.

Conclusion: Comparing the attitudes and perspectives of healthcare practitioners and patients revealed the importance of taking both stakeholder groups into consideration. While both groups tend to raise concerns about similar themes connected to responsibility, this concern involves complex dynamics present in the epistemic environment of healthcare.

Strengths and limitations:
- This study uniquely compared healthcare practitioners' and patients' perspectives within a single qualitative design, using similar interview guides to enable direct cross-stakeholder comparison.
- The study examined perceptions of an AI model that had been designed and validated for clinical use, enhancing the practical relevance of the findings.
- The relatively small sample size may have limited the diversity of perspectives captured and reduced transferability.
- As the study was conducted in a single academic hospital in the Netherlands, the findings may not be generalizable to other healthcare settings or national contexts.
