🤖 AI Summary
LLM evaluation suffers from unstable scores and fluctuating model rankings because of stochastic factors such as the selection, ordering, and content of few-shot examples, so evaluations under a single fixed setting can yield unfair comparisons between models.
Method: We propose an instance-level randomized evaluation framework that independently samples the stochastic factors (e.g., few-shot example composition and order) for every input instance, then averages results across multiple independent runs to reduce variance.
Contribution/Results: We give the first theoretical analysis of the variance these random factors induce in LLM evaluation and prove that the proposed method reduces it. Experiments show the new paradigm substantially improves score stability and ranking robustness over conventional fixed-setting evaluation, matching the robustness of prior multi-run methods at less than half their computational cost and thereby mitigating the risk of misjudging models due to stochasticity.
📝 Abstract
Evaluations of large language models (LLMs) suffer from instability: small changes in random factors, such as the few-shot examples, can cause drastic fluctuations in scores and even in model rankings. Moreover, different LLMs can have different preferences for a particular setting of these random factors. As a result, the common paradigm of evaluating all models under one fixed setting of random factors can lead to unfair comparisons between LLMs. To mitigate this volatility, we first theoretically analyze the sources of variance induced by changes in random factors. Targeting these sources, we then propose the instance-level randomization (ILR) method to reduce variance and enhance fairness in model comparisons. Instead of using one fixed setting across the whole benchmark in a single experiment, we randomize all factors that affect evaluation scores for every single instance, run multiple experiments, and report the averaged score. Theoretical analyses and empirical results demonstrate that ILR reduces the variance and unfair comparisons caused by random factors, and achieves a similar level of robustness at less than half the computational cost of previous methods.
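The instance-level randomization procedure described in the abstract (resample the random factors per instance, repeat the experiment, average) can be sketched roughly as follows. This is a minimal illustration, not the paper's implementation: `model_score_fn`, the example pool, and the scoring interface are hypothetical placeholders standing in for a real benchmark harness.

```python
import random
import statistics

def evaluate_fixed(model_score_fn, instances, example_pool, k, seed):
    """Conventional paradigm: one fixed few-shot setting shared by all instances."""
    rng = random.Random(seed)
    shots = rng.sample(example_pool, k)  # same examples, same order, everywhere
    return statistics.mean(model_score_fn(x, shots) for x in instances)

def evaluate_ilr(model_score_fn, instances, example_pool, k, n_runs, seed):
    """ILR sketch: resample the random factors (here, few-shot example content
    and order) independently for every instance, then average over runs."""
    rng = random.Random(seed)
    run_scores = []
    for _ in range(n_runs):
        scores = []
        for x in instances:
            shots = rng.sample(example_pool, k)  # fresh example set per instance
            rng.shuffle(shots)                   # fresh ordering per instance
            scores.append(model_score_fn(x, shots))
        run_scores.append(statistics.mean(scores))
    return statistics.mean(run_scores)
```

Because each per-instance score is drawn under its own independently sampled setting, the averaged score marginalizes over the random factors rather than conditioning on one arbitrary fixed choice, which is the intuition behind the variance reduction the paper analyzes.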