On the robustness of medical term representations in locally deployable language models

2026-02-26 · health informatics · Title + abstract only
View on medRxiv

Structured Abstract

Background: Hosting large language models (LLMs) on-premises can secure patient data but requires compact architectures that run on standard hardware. How such constraints affect the robustness of their representations of medical terminology matters for clinical AI safety but is poorly understood. The statistical nature of LLM training inherently limits the representation of terms with low societal prominence, low lexical frequency, or high ambiguity. ...

Predicted journal destinations