Medical Hallucination in Foundation Models and Their Impact on Healthcare

2025-03-03 · health systems and quality improvement · Title + abstract only
View on medRxiv
Hallucinations in foundation models arise from autoregressive training objectives that prioritize token-likelihood optimization over epistemic accuracy, fostering overconfidence and poorly calibrated uncertainty. In clinical settings, where a profound knowledge asymmetry exists between AI systems and end users, undetected misinformation such as fabricated medications, contraindicated drug recommendations, or false imaging interpretations poses direct patient-safety risks. We define medical hallu...

Predicted journal destinations