🤖 AI Summary
Evaluating response uncertainty in closed-source large language models (LLMs) remains challenging because their black-box nature precludes access to internal states.
Method: This paper introduces UBENCH, the first multiple-choice benchmark (3,978 items) designed specifically for black-box reliability assessment, covering knowledge, language, comprehension, and reasoning tasks. It proposes a systematic approach to quantifying uncertainty from a single model sample, requiring no access to model internals, no fine-tuning, and little computational overhead. The method combines confidence calibration, answer-order control, chain-of-thought (CoT), and role-based prompting to analyze how prompt engineering modulates reliability.
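To make the calibration idea concrete, here is a minimal sketch of scoring single-sample confidence outputs with expected calibration error (ECE), a standard calibration metric. The bin edges, metric choice, and toy data are assumptions for illustration; the paper's exact scoring protocol may differ.

```python
# Hypothetical sketch: each benchmark item yields one sample consisting of
# the model's stated confidence and whether its answer was correct.
# ECE is the accuracy-vs-confidence gap, averaged over confidence bins
# and weighted by bin occupancy.

def expected_calibration_error(confidences, correct, n_bins=10):
    """ECE: occupancy-weighted gap between mean confidence and accuracy per bin."""
    n = len(confidences)
    ece = 0.0
    for b in range(n_bins):
        lo, hi = b / n_bins, (b + 1) / n_bins
        # half-open bins [lo, hi); the last bin also includes confidence == 1.0
        idx = [i for i, c in enumerate(confidences)
               if lo <= c < hi or (b == n_bins - 1 and c == hi)]
        if not idx:
            continue
        acc = sum(correct[i] for i in idx) / len(idx)
        conf = sum(confidences[i] for i in idx) / len(idx)
        ece += (len(idx) / n) * abs(acc - conf)
    return ece

# Toy data: stated confidences and correctness for five items.
confs = [0.95, 0.65, 0.85, 0.35, 0.75]
right = [1, 1, 0, 0, 1]
print(round(expected_calibration_error(confs, right), 3))
```

A perfectly calibrated model would have each bin's accuracy match its mean confidence, giving an ECE of zero; larger values indicate over- or under-confidence.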
Results: Experiments across 15 state-of-the-art LLMs show GLM-4 to be the most reliable, closely followed by GPT-4. UBENCH attains state-of-the-art performance while significantly reducing computational cost compared to conventional multi-sample uncertainty estimation methods.
📝 Abstract
The rapid development of large language models (LLMs) has shown promising practical results. However, their low interpretability often leads to errors in unforeseen circumstances, limiting their utility. Many works have focused on creating comprehensive evaluation systems, but previous benchmarks have primarily assessed problem-solving ability while neglecting response uncertainty, which can make models unreliable. Recent methods for measuring LLM reliability are resource-intensive and cannot test black-box models. To address this, we propose UBENCH, a comprehensive benchmark for evaluating LLM reliability. UBENCH includes 3,978 multiple-choice questions covering knowledge, language, understanding, and reasoning abilities. Experimental results show that UBENCH achieves state-of-the-art performance, while its single-sampling method saves significant computational resources compared to baseline methods that require multiple samplings. Additionally, based on UBENCH, we evaluate the reliability of 15 popular LLMs, finding GLM4 to be the most outstanding, closely followed by GPT-4. We also explore the impact of Chain-of-Thought prompts, role-playing prompts, option order, and temperature on LLM reliability, analyzing their varying effects across different LLMs.
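The abstract's ablation factors (Chain-of-Thought, role-playing, and option order) can be illustrated as independent toggles on a prompt template. The wording, option labels, and function below are hypothetical, not UBENCH's actual templates:

```python
# Hypothetical illustration of the prompt factors the paper varies:
# option order, Chain-of-Thought, and role-playing instructions.
import random

def build_prompt(question, options, shuffle=False, cot=False, role=False, seed=0):
    opts = list(options)
    if shuffle:  # answer-order control: permute options deterministically per seed
        random.Random(seed).shuffle(opts)
    lines = []
    if role:     # role-playing prefix (assumed wording)
        lines.append("You are a careful expert assistant.")
    lines.append(question)
    lines += [f"{chr(65 + i)}. {o}" for i, o in enumerate(opts)]  # A., B., C., ...
    if cot:      # Chain-of-Thought instruction (assumed wording)
        lines.append("Think step by step, then give your final answer.")
    lines.append("Answer with a single option letter.")
    return "\n".join(lines)

print(build_prompt("What is 2 + 2?", ["3", "4", "5"], cot=True, role=True))
```

Running the same item set with each toggle flipped (and at different sampling temperatures) is one way to isolate how each prompt factor affects a model's calibration.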