
An Information-Theoretic Perspective on Multi-LLM Uncertainty Estimation

2025-07-10 health informatics Title + abstract only
View on medRxiv

Large language models (LLMs) often behave inconsistently across inputs, indicating uncertainty and motivating the need for its quantification in high-stakes settings. Prior work on calibration and uncertainty quantification often focuses on individual models, overlooking the potential of model diversity. We hypothesize that LLMs make complementary predictions due to differences in training and the Zipfian nature of language, and that aggregating their outputs leads to more reliable uncertainty e...
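The abstract is truncated, but the aggregation idea it describes corresponds to a standard information-theoretic decomposition: average the answer distributions of several models, then split the entropy of the average into a mean per-model term and a disagreement term (the mutual information between model choice and prediction). The sketch below is illustrative only, under the assumption that each model yields a probability distribution over the same fixed answer set; the model names and numbers are hypothetical, not from the paper.

```python
import math

def entropy(p):
    """Shannon entropy in bits of a probability distribution."""
    return -sum(x * math.log2(x) for x in p if x > 0)

def ensemble_uncertainty(dists):
    """Decompose predictive uncertainty for an ensemble of models.

    dists: list of per-model probability distributions over the same answers.
    Returns (total, aleatoric, epistemic), where
      total     = entropy of the averaged distribution,
      aleatoric = mean per-model entropy,
      epistemic = total - aleatoric, the mutual information that
                  captures disagreement between the models.
    """
    n = len(dists)
    k = len(dists[0])
    mean = [sum(d[i] for d in dists) / n for i in range(k)]
    total = entropy(mean)
    aleatoric = sum(entropy(d) for d in dists) / n
    return total, aleatoric, total - aleatoric

# Three hypothetical LLMs scoring the same three answer options:
models = [
    [0.7, 0.2, 0.1],  # model A favors option 0
    [0.6, 0.3, 0.1],  # model B agrees
    [0.1, 0.2, 0.7],  # model C disagrees, so the epistemic term rises
]
total, aleatoric, epistemic = ensemble_uncertainty(models)
```

When the models agree, the epistemic term is near zero and the total uncertainty reduces to the average single-model entropy; disagreement, as with model C above, shows up as a strictly positive epistemic term by Jensen's inequality.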
