
Evaluation of symptom checker formats to support health literacy and trust in AI: Results from an online randomised-controlled trial

Ayre, J.; Gallagher, K.; Smith, J.; Hudson, C.; Scott, A.; Woods, A.; Ng, C.; Wickramasinghe, Y.; Ma, I.; Nadesan, W.; Kapoor, G.; Edlund, G.; Butters, L.; Vu, T.; McCaffery, K. J.

Posted 2026-03-12 · Public and Global Health · medRxiv · doi: 10.64898/2026.03.11.26347036

Objectives: Evaluate the impact of online symptom checker formats on symptom management knowledge, symptom checker trust and acceptability, and behavioural intentions.

Design: Two 5-arm parallel-group online randomised controlled trials.

Setting: Online survey.

Participants: 2110 Australian adults recruited through an online research panel in June 2025; 49% identified as man/male and 51% as woman/female, with a median age of 49 years (IQR = 28). Participants viewed a hypothetical health scenario (fever and vomiting) followed by a screenshot of an online symptom checker from the national health service provider, healthdirect.

Interventions: Participants were randomised to a symptom acuity level (low: self-care at home; or moderate: see a General Practitioner (GP) within 24 hours) and to one of five symptom checker formats. The standard format showed the existing healthdirect symptom checker advice. The remaining four formats were AI-enhanced versions. The first AI-enhanced version included features such as more tailored advice, a rationale for the acuity level, and an AI disclosure statement. The other AI-enhanced formats added further features: numbered steps, multimedia, and more detailed information about the use of AI.

Main outcome measures: Primary outcomes were intentions to follow the symptom checker's self-care advice and intentions to see a GP within 24 hours. Secondary outcomes were trust in the advice, knowledge of symptom management, and acceptability of the tool. All outcomes were assessed immediately post-intervention; knowledge was also assessed after 2 weeks.

Results: When advised to self-care at home, the AI-enhanced groups reported lower intentions to see a GP within 24 hours (median 3.00 out of 5) compared with the standard (original) tool (median 4.00; adjusted p = 0.003). There were no other significant effects on intentions. Immediately following the intervention, participants who viewed an AI-enhanced format reported greater knowledge about how to manage current and changing symptoms, across both acuity levels (adjusted ps < 0.001). Knowledge gains were not sustained at 2-week follow-up. There were no significant effects on trust or acceptability.

Conclusions: Participants who viewed the more tailored information in the AI-enhanced formats demonstrated stronger knowledge for managing symptoms than those who viewed the standard format. There was also some evidence that an AI-enhanced format may be more effective at reducing use of primary care for symptoms that can be managed at home. Trust and acceptability were high across formats, and the explicit use of AI did not significantly affect these outcomes. Future research should investigate these formats using interactive prototypes across a wider variety of health contexts.

Registration: ACTRN12625000474459p

Key messages:

- What is already known on this topic: Although online, evidence-based symptom checkers have been widely available from reputable health organisations for over a decade, they often face poor uptake and may not adequately meet the health literacy needs of diverse users.
- What this study adds: Symptom checker features that could be implemented with AI, such as tailored information and a clear rationale for triage advice, may help support appropriate symptom management. Statements about the tool's use of AI did not appear to affect trust or acceptability of the symptom checker.
- How this study might affect research, practice or policy: Findings from this study suggest that using AI to enhance symptom checker advice may improve appropriate symptom management without negatively affecting trust and acceptability of the tool. Further research is needed to investigate AI-enhanced symptom checker formats using interactive prototypes across a wider variety of health contexts.

Matching journals

The top 6 journals account for 50% of the predicted probability mass.

Rank | Journal | Papers in training set | Percentile | Probability
1 | BMJ Open | 554 | Top 1% | 14.6%
2 | Journal of Medical Internet Research | 85 | Top 0.3% | 12.6%
3 | British Journal of General Practice | 22 | Top 0.1% | 8.3%
4 | Health Expectations | 12 | Top 0.1% | 6.8%
5 | PLOS ONE | 4510 | Top 32% | 4.8%
6 | BJGP Open | 12 | Top 0.1% | 3.9%
(50% of probability mass above)
7 | npj Digital Medicine | 97 | Top 1% | 3.6%
8 | Palliative Medicine | 10 | Top 0.1% | 3.0%
9 | BMC Health Services Research | 42 | Top 0.8% | 2.9%
10 | BMC Public Health | 147 | Top 2% | 2.4%
11 | BMJ Open Quality | 15 | Top 0.4% | 2.1%
12 | JMIR Formative Research | 32 | Top 0.6% | 2.1%
13 | PLOS Digital Health | 91 | Top 1% | 1.9%
14 | Trials | 25 | Top 0.9% | 1.7%
15 | Journal of General Internal Medicine | 20 | Top 0.5% | 1.7%
16 | Pilot and Feasibility Studies | 12 | Top 0.3% | 1.6%
17 | Journal of Public Health | 23 | Top 0.5% | 1.5%
18 | BMC Medicine | 163 | Top 4% | 1.5%
19 | Preventive Medicine | 11 | Top 0.2% | 0.9%
20 | JMIR Research Protocols | 18 | Top 2% | 0.7%
21 | BMC Infectious Diseases | 118 | Top 6% | 0.6%
22 | Frontiers in Public Health | 140 | Top 9% | 0.6%
23 | Public Health | 34 | Top 2% | 0.6%
24 | The Lancet Public Health | 20 | Top 0.8% | 0.6%