Clinicians' Rationale for Editing Ambient AI-Drafted Clinical Notes: Persistent Challenges and Implications for Improvement

Guo, Y.; Hu, D.; Yang, Z.; Chow, E.; Tam, S.; Perret, D.; Pandita, D.; Zheng, K.

Posted: 2026-02-22 · Subject area: health informatics
DOI: 10.64898/2026.02.20.26346729 · medRxiv
Structured Abstract

Objective: The use of ambient AI documentation tools is growing rapidly in US hospitals and clinics. Such tools generate the first draft of clinical notes from transcribed patient-provider conversations, which clinicians can then review and edit before signing into the electronic health record (EHR). Understanding how and why clinicians modify AI-generated drafts is critical to improving AI design and clinical efficiency, yet it has been understudied. This study aims to address that gap.

Materials and Methods: We conducted semistructured interviews with 30 clinicians from the University of California, Irvine Health who used a commercial ambient AI tool in routine outpatient care. We invited them to describe how and why they edited AI drafts, drawing on both their personal experience and a review of real-world examples identified in our previous studies.

Results: Modifications to AI drafts were made primarily to improve clinical accuracy and specialty-specific precision, reduce medico-legal and liability risk, and meet billing, coding, and documentation standards. Such editing was necessary because of transcription errors, speaker attribution mistakes, overconfident statements without supporting evidence, missing key clinical details, and the AI's lack of information about the patient context.

Conclusion and Discussion: Improving ambient AI documentation will require coordinated effort from vendors, institutions, and clinicians. Key targets include core model reliability (e.g., transcription accuracy), specialty- and encounter-level customization, clinician-level personalization, more effective EHR integration, and institutional support (e.g., training, governance, and standardized review guidance), complemented by clinicians' adaptive communication strategies that strengthen human-AI collaboration.

Matching journals

The top 4 journals account for 50% of the predicted probability mass.

Rank | Journal                                       | Papers in training set | Percentile | Probability
-----|-----------------------------------------------|------------------------|------------|------------
  1  | Journal of the American Medical Informatics Association | 61           | Top 0.1%   | 22.1%
  2  | JAMIA Open                                    | 37                     | Top 0.1%   | 14.1%
  3  | Journal of Medical Internet Research          | 85                     | Top 0.5%   | 9.0%
  4  | BMJ Health & Care Informatics                 | 13                     | Top 0.1%   | 6.3%
  5  | DIGITAL HEALTH                                | 12                     | Top 0.1%   | 4.8%
  6  | npj Digital Medicine                          | 97                     | Top 0.9%   | 4.8%
  7  | Journal of Biomedical Informatics             | 45                     | Top 0.3%   | 4.2%
  8  | BMC Medical Informatics and Decision Making   | 39                     | Top 0.7%   | 3.9%
  9  | JMIR Formative Research                       | 32                     | Top 0.3%   | 3.6%
 10  | Frontiers in Digital Health                   | 20                     | Top 0.3%   | 3.5%
 11  | PLOS Digital Health                           | 91                     | Top 0.8%   | 3.5%
 12  | JMIR Medical Informatics                      | 17                     | Top 0.5%   | 2.4%
 13  | Journal of General Internal Medicine          | 20                     | Top 0.4%   | 2.0%
 14  | PLOS ONE                                      | 4510                   | Top 51%    | 1.9%
 15  | BMJ Open Quality                              | 15                     | Top 0.6%   | 1.2%
 16  | BMJ Open                                      | 554                    | Top 11%    | 1.2%
 17  | Journal of Clinical and Translational Science | 11                     | Top 0.3%   | 1.1%
 18  | JMIR Public Health and Surveillance           | 45                     | Top 3%     | 0.9%
 19  | JMIR Research Protocols                       | 18                     | Top 1%     | 0.8%
 20  | Healthcare                                    | 16                     | Top 2%     | 0.7%
 21  | Scientific Reports                            | 3102                   | Top 77%    | 0.7%
 22  | International Journal of Medical Informatics  | 25                     | Top 2%     | 0.6%