
Drug or Pokemon? An analysis of the ability of large language models to discern fabricated medications

2026-01-13 · health informatics

Background: The use of large language models (LLMs) is increasing in the medical field; however, LLMs are often subject to "confabulations." Notably, LLMs are vulnerable to adversarial attacks, i.e., fabricated details embedded within prompts, which is concerning given both health misinformation and inadvertent errors in the medical record. The purpose of this study was to determine the effect of adversarial attacks by embedding one fabricated medication into a list of existing medicines. Methods: A total...
