Reconsidering LLM Uncertainty Estimation Methods in the Wild

📅 2025-06-01
📈 Citations: 0
Influential: 0
🤖 AI Summary
This paper systematically evaluates the robustness of 19 uncertainty estimation (UE) methods for large language models (LLMs) under realistic deployment challenges: threshold sensitivity, query perturbations (typos, adversarial prompts, contextual history), long-text generation adaptation, and multi-UE score fusion. Methodologically, it introduces a unified evaluation framework grounded in AUROC and prompt robustness ratio (PRR), incorporating typo injection, adversarial prompt construction, context perturbation, segment-wise scoring, and weighted/voting ensembles. Key contributions include the first empirical demonstration that UE methods exhibit high sensitivity to decision thresholds under distributional shift, with adversarial prompts constituting the primary vulnerability. Results show substantial performance degradation across most UE methods under adversarial prompting; multi-score ensemble strategies yield an average AUROC improvement of 8.2%; and long-generation adaptation techniques are effective yet remain suboptimal. The study provides actionable insights for deploying reliable LLM uncertainty quantification in production environments.
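The summary mentions two measurable ingredients: scoring each UE method with AUROC and fusing several UE scores per query via weighted ensembling. A minimal sketch of both, with toy scores and hypothetical weights (none of these values or function names come from the paper):

```python
# Hedged sketch: weighted-average fusion of multiple uncertainty-estimation
# (UE) scores, evaluated with AUROC. Toy data only; not the paper's setup.

def auroc(scores, labels):
    """AUROC via the rank-sum (Mann-Whitney U) statistic.
    labels: 1 = hallucination, which should receive a HIGH UE score."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def ensemble(score_lists, weights):
    """Weighted average of several UE methods' scores for the same queries."""
    return [sum(w * s for w, s in zip(weights, col)) for col in zip(*score_lists)]

# Two hypothetical UE methods scoring five generations.
ue_a = [0.9, 0.2, 0.25, 0.3, 0.8]   # e.g. a sampling-based score (toy values)
ue_b = [0.6, 0.1, 0.9, 0.4, 0.7]    # e.g. a self-evaluation score (toy values)
labels = [1, 0, 1, 0, 1]            # 1 = hallucinated answer

combined = ensemble([ue_a, ue_b], weights=[0.5, 0.5])
print(auroc(ue_a, labels), auroc(combined, labels))
```

On this toy data the fused score ranks every hallucination above every correct answer, illustrating (not reproducing) the reported gain from test-time ensembling.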

📝 Abstract
Large Language Model (LLM) Uncertainty Estimation (UE) methods have become a crucial tool for detecting hallucinations in recent years. While numerous UE methods have been proposed, most existing studies evaluate them in isolated short-form QA settings using threshold-independent metrics such as AUROC or PRR. However, real-world deployment of UE methods introduces several challenges. In this work, we systematically examine four key aspects of deploying UE methods in practical settings. Specifically, we assess (1) the sensitivity of UE methods to decision threshold selection, (2) their robustness to query transformations such as typos, adversarial prompts, and prior chat history, (3) their applicability to long-form generation, and (4) strategies for handling multiple UE scores for a single query. Our evaluations on 19 UE methods reveal that most of them are highly sensitive to threshold selection when there is a distribution shift in the calibration dataset. While these methods generally exhibit robustness against previous chat history and typos, they are significantly vulnerable to adversarial prompts. Additionally, while existing UE methods can be adapted for long-form generation through various strategies, there remains considerable room for improvement. Lastly, ensembling multiple UE scores at test time provides a notable performance boost, which highlights its potential as a practical improvement strategy. Code is available at: https://github.com/duygunuryldz/uncertainty_in_the_wild.
Problem

Research questions and friction points this paper is trying to address.

Assessing sensitivity of UE methods to threshold selection under distribution shifts
Evaluating robustness of UE methods against adversarial prompts and query transformations
Exploring strategies for adapting UE methods to long-form generation and multiple scores
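The first question above, threshold sensitivity under distribution shift, can be illustrated with a minimal sketch: calibrate a decision threshold on one dataset, then watch it misfire when test scores drift. All values and helper names below are illustrative assumptions, not the paper's protocol:

```python
# Hedged sketch of threshold sensitivity: a threshold tuned on a calibration
# set can misclassify once the UE score distribution shifts. Toy data only.

def best_threshold(scores, labels):
    """Threshold maximizing accuracy on calibration data (flag if score >= t)."""
    def acc(t):
        return sum((s >= t) == bool(y) for s, y in zip(scores, labels)) / len(scores)
    return max(sorted(set(scores)), key=acc)

calib_scores = [0.1, 0.3, 0.6, 0.8]
calib_labels = [0, 0, 1, 1]          # 1 = hallucination
t = best_threshold(calib_scores, calib_labels)

# Under distribution shift, scores of CORRECT answers drift upward past the
# frozen threshold, so they get wrongly flagged as hallucinations.
shifted_correct = [0.65, 0.7]
false_alarms = [s >= t for s in shifted_correct]
print(t, false_alarms)
```

Threshold-independent metrics like AUROC hide this failure mode, which is why the paper evaluates threshold selection explicitly.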
Innovation

Methods, ideas, or system contributions that make the work stand out.

Assessing UE methods' threshold sensitivity
Evaluating robustness to query transformations
Adapting UE methods for long-form generation
Authors

Y. Bakman, University of Southern California
D. Yaldiz, University of Southern California
Sungmin Kang, University of Southern California
Tuo Zhang, University of Southern California
Baturalp Buyukates, Assistant Professor, University of Birmingham (Trustworthy Machine Learning, Federated Learning, Age of Information, Networks)
S. Avestimehr, University of Southern California
Sai Praneeth Karimireddy, University of Southern California (Machine Learning, Optimization, Privacy, Federated Learning, Data Economy)