Towards Reproducible LLM Evaluation: Quantifying Uncertainty in LLM Benchmark Scores

📅 2024-10-04
🏛️ arXiv.org
📈 Citations: 4
Influential: 0
📄 PDF
🤖 AI Summary
Large language model (LLM) benchmark evaluations suffer from irreproducibility because model outputs remain stochastic even under nominally deterministic settings (e.g., zero temperature and a fixed random seed). Method: This work introduces a systematic uncertainty quantification framework for LLM evaluation, proposing a lightweight confidence-estimation method that requires no additional inference and combines bootstrap resampling with analytical variance estimation. The approach is validated on benchmarks for cardinal-direction reasoning using multiple evaluation runs. Contribution/Results: Experiments reveal non-negligible benchmark-score standard deviations of 2.3%–5.7% even under fully deterministic conditions. With as few as three repeated runs, the method yields 95% confidence intervals narrower than 1.2%, substantially improving evaluation reproducibility. Balancing statistical rigor with engineering practicality, the work offers a scalable, uncertainty-aware approach to reliable LLM assessment.
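
As an illustration of the bootstrap-based confidence estimation mentioned above, here is a minimal sketch that assumes only a vector of per-question 0/1 outcomes from a single evaluation run; the function name and counts are illustrative, not taken from the paper.

import numpy as np

def bootstrap_ci(correct, n_boot=10_000, alpha=0.05, seed=0):
    # Percentile bootstrap confidence interval for benchmark accuracy.
    # `correct` holds per-question 0/1 outcomes from one evaluation run.
    rng = np.random.default_rng(seed)
    correct = np.asarray(correct)
    n = len(correct)
    # Resample questions with replacement and recompute the score each time.
    scores = rng.choice(correct, size=(n_boot, n), replace=True).mean(axis=1)
    lo, hi = np.quantile(scores, [alpha / 2, 1 - alpha / 2])
    return correct.mean(), (lo, hi)

# Hypothetical run: 200 questions, 142 answered correctly.
score, (lo, hi) = bootstrap_ci([1] * 142 + [0] * 58)
print(f"score={score:.3f}, 95% bootstrap CI=({lo:.3f}, {hi:.3f})")

Because the resampling reuses responses already collected, it adds no inference cost; it captures variability over the choice of benchmark questions rather than run-to-run stochasticity.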

📝 Abstract
Large language models (LLMs) are stochastic, and not all models give deterministic answers, even when setting temperature to zero with a fixed random seed. However, few benchmark studies attempt to quantify uncertainty, partly due to the time and cost of repeated experiments. We use benchmarks designed for testing LLMs' capacity to reason about cardinal directions to explore the impact of experimental repeats on mean score and prediction interval. We suggest a simple method for cost-effectively quantifying the uncertainty of a benchmark score and make recommendations concerning reproducible LLM evaluation.
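
The abstract's idea of a prediction interval over repeated runs can be sketched as follows, assuming each repeat yields one overall benchmark score. This uses a standard t-based prediction interval with made-up scores; the paper's exact construction may differ.

import math
from statistics import mean, stdev
from scipy.stats import t

def prediction_interval(scores, alpha=0.05):
    # Mean score and (1 - alpha) prediction interval for one further run,
    # based on a small number of repeated-run scores.
    n = len(scores)
    m, s = mean(scores), stdev(scores)
    crit = t.ppf(1 - alpha / 2, df=n - 1)
    half_width = crit * s * math.sqrt(1 + 1 / n)
    return m, (m - half_width, m + half_width)

# Illustrative repeated-run scores (not results from the paper).
runs = [0.71, 0.69, 0.73, 0.70, 0.72]
m, (lo, hi) = prediction_interval(runs)
print(f"mean={m:.3f}, 95% prediction interval=({lo:.3f}, {hi:.3f})")

The interval widens rapidly when only two or three repeats are available, which is why the number of repeats matters for how a benchmark score should be reported.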
Problem

Research questions and friction points this paper is trying to address.

Benchmark scores vary across runs even at zero temperature with a fixed seed, so single-run results are hard to reproduce
Repeated experiments that would quantify this uncertainty are rarely run because of time and cost
It is unclear how many repeats are needed and how they affect the mean score and prediction interval
Innovation

Methods, ideas, or system contributions that make the work stand out.

A simple, cost-effective method for quantifying the uncertainty of a benchmark score (a single-run analytical sketch follows this list)
Empirical study of how experimental repeats affect mean score and prediction interval, using cardinal-direction reasoning benchmarks
Practical recommendations for reproducible LLM evaluation
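
For the single-run, cost-effective direction noted above, one common analytical variance estimate treats the benchmark score as a binomial proportion. The sketch below uses the standard normal (Wald) approximation; it is a generic illustration, not necessarily the estimator used in the paper.

import math

def analytic_ci(n_correct, n_questions, z=1.96):
    # Normal-approximation (Wald) interval for accuracy treated as a
    # binomial proportion; requires only a single evaluation run.
    p = n_correct / n_questions
    se = math.sqrt(p * (1 - p) / n_questions)
    return p, (p - z * se, p + z * se)

# Hypothetical run: 142 of 200 questions answered correctly.
print(analytic_ci(142, 200))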
Robert E Blackwell (The Alan Turing Institute)
Jon Barry (The Centre for Environment Fisheries and Aquaculture Science)
Anthony G. Cohn (School of Computer Science, University of Leeds)