Metric assessment protocol in the context of answer fluctuation on MCQ tasks

📅 2025-07-21
📈 Citations: 0
Influential: 0
🤖 AI Summary
Large language models (LLMs) exhibit answer instability in multiple-choice question (MCQ) evaluation: minor prompt perturbations cause substantial prediction shifts, undermining metric reliability. Method: The paper proposes an evaluation protocol that quantitatively links evaluation metrics to answer volatility. Central to the approach is worst accuracy (worst-case accuracy, WCA), a stability-oriented metric defined as the minimum accuracy across a diverse set of semantically equivalent prompt variants. WCA is complemented with multi-metric comparison, systematic prompt perturbation testing, volatility modeling, and statistical correlation analysis. Contribution/Results: Experiments show that conventional metrics (e.g., standard accuracy) are highly sensitive to prompt variations, whereas WCA demonstrates superior consistency and robustness under diverse perturbations. It achieves significantly higher rank correlation with human judgments of model reliability and exhibits stronger statistical stability across model families and question domains. This work establishes the first principled, quantitative framework for assessing MCQ evaluation robustness, offering a new paradigm that jointly weighs validity and resilience.
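The worst-case accuracy described above can be sketched in a few lines: score each prompt variant separately, then take the minimum. This is a minimal illustration, not the paper's code; the function name, data, and prompt variants are invented for the example.

```python
# Illustrative sketch of worst-case accuracy (WCA): the minimum accuracy
# across semantically equivalent prompt variants. Data below is made up.

def worst_case_accuracy(predictions_per_variant, gold):
    """predictions_per_variant: one list of model answers per prompt variant."""
    def accuracy(preds):
        return sum(p == g for p, g in zip(preds, gold)) / len(gold)
    return min(accuracy(preds) for preds in predictions_per_variant)

# Example: 3 prompt variants over 4 questions with gold answers A, B, C, D.
gold = ["A", "B", "C", "D"]
variants = [
    ["A", "B", "C", "D"],  # variant 1: 4/4 correct
    ["A", "B", "C", "A"],  # variant 2: 3/4 correct
    ["A", "D", "C", "A"],  # variant 3: 2/4 correct
]
print(worst_case_accuracy(variants, gold))  # -> 0.5
```

Unlike mean accuracy (here 0.75), the minimum rewards models only for answers that survive every rephrasing, which is why it tracks answer stability.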

📝 Abstract
Using multiple-choice questions (MCQs) has become a standard way to assess LLM capabilities efficiently. A variety of metrics can be employed for this task, yet previous research has not assessed them thoroughly. At the same time, MCQ evaluation suffers from answer fluctuation: models produce different answers given slight changes in prompts. We suggest a metric assessment protocol in which evaluation methodologies are analyzed through their connection with fluctuation rates, as well as original performance. Our results show a strong link between existing metrics and answer fluctuation, even when the metrics are computed without any additional prompt variants. A novel metric, worst accuracy, demonstrates the highest association under the protocol.
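The fluctuation rate the protocol correlates metrics against can be sketched as the fraction of questions whose answer changes across prompt variants. This is a hedged illustration, assuming a per-question notion of fluctuation; the function name and data are invented, not taken from the paper.

```python
# Illustrative sketch of an answer-fluctuation rate: the share of questions
# for which a model's answer is not identical across all prompt variants.

def fluctuation_rate(predictions_per_variant):
    """predictions_per_variant: one list of model answers per prompt variant."""
    n_questions = len(predictions_per_variant[0])
    changed = sum(
        len({variant[i] for variant in predictions_per_variant}) > 1
        for i in range(n_questions)
    )
    return changed / n_questions

# Same model answered 4 questions under 3 prompt variants; questions 2 and 4
# receive different answers depending on the prompt wording.
preds = [
    ["A", "B", "C", "D"],
    ["A", "B", "C", "A"],
    ["A", "D", "C", "A"],
]
print(fluctuation_rate(preds))  # -> 0.5
```

Under the protocol, a metric is judged by how strongly it associates with this rate across models, alongside its ordinary performance scores.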
Problem

Research questions and friction points this paper is trying to address.

Assessing metrics for LLM performance on MCQs
Addressing answer fluctuation in MCQ evaluations
Proposing worst accuracy as a robust metric
Innovation

Methods, ideas, or system contributions that make the work stand out.

Assessing metrics via their link with fluctuation rates
Introducing worst accuracy as novel metric
Evaluating MCQ performance without prompt variants