🤖 AI Summary
Large language models (LLMs) exhibit answer instability in multiple-choice question (MCQ) evaluation—minor prompt perturbations cause substantial prediction shifts, undermining metric reliability.
Method: We propose a novel evaluation protocol that quantitatively links evaluation metrics to answer volatility. Central to our approach is “Worst-case Accuracy” (WCA), a stability-oriented metric defined as the minimum accuracy across a diverse set of semantically equivalent prompt variants. We complement WCA with multi-metric comparison, systematic prompt perturbation testing, volatility modeling, and statistical correlation analysis.
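The definition above can be sketched in a few lines of Python. This is an illustrative reconstruction, not the authors' code: the function name and data layout are assumptions, and only the core idea (minimum accuracy over semantically equivalent prompt variants) comes from the text.

```python
def worst_case_accuracy(predictions_per_variant, gold):
    """Hypothetical sketch of Worst-case Accuracy (WCA).

    predictions_per_variant: one list of model answers per prompt variant,
    all over the same questions. WCA is the minimum per-variant accuracy.
    """
    accuracies = []
    for preds in predictions_per_variant:
        correct = sum(p == g for p, g in zip(preds, gold))
        accuracies.append(correct / len(gold))
    return min(accuracies)

# Toy example: three paraphrased prompts over four questions.
gold = ["A", "B", "C", "D"]
variants = [
    ["A", "B", "C", "D"],  # variant 1: 4/4 correct
    ["A", "B", "D", "D"],  # variant 2: 3/4 correct
    ["A", "C", "C", "D"],  # variant 3: 3/4 correct
]
print(worst_case_accuracy(variants, gold))  # 0.75
```

A model that is accurate on average but flips answers under rephrasing is penalized here, since a single weak variant drives the score down, which is exactly the stability property the protocol is designed to reward.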
Contribution/Results: Experiments show that conventional metrics (e.g., standard accuracy) are highly sensitive to prompt variations, whereas WCA remains consistent and robust under diverse perturbations. WCA also achieves significantly higher rank correlation with human judgments of model reliability and exhibits stronger statistical stability across model families and question domains. This work establishes the first principled, quantitative framework for assessing MCQ evaluation robustness, one that jointly accounts for validity and resilience.
📝 Abstract
Using multiple-choice questions (MCQs) has become a standard way to assess LLM capabilities efficiently. A variety of metrics can be employed for this task, yet previous research has not assessed them thoroughly. At the same time, MCQ evaluation suffers from answer fluctuation: models produce different answers given slight changes in prompts. We propose a metric assessment protocol in which evaluation methodologies are analyzed through their connection with fluctuation rates, as well as raw performance. Our results show a strong link between existing metrics and answer changes, even when the metrics are computed without any additional prompt variants. A novel metric, worst accuracy, demonstrates the highest association under the protocol.