Toward a unified framework for data-efficient evaluation of large language models

📅 2025-10-05
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Existing LLM evaluation methods are costly, while classical Item Response Theory (IRT) models support only binary scoring and are confined to single benchmarks, failing to characterize cross-task ability structures. Method: We propose LEGO-IRT—the first unified IRT framework jointly modeling binary and continuous responses—by factorizing ability representations into general and structure-specific components, explicitly capturing correlations across multiple benchmarks and evaluation metrics; it integrates Bayesian inference with multi-task learning to substantially reduce data requirements. Contribution/Results: Experiments across 70 LLMs and 5 benchmarks demonstrate that LEGO-IRT achieves stable ability estimation using only 3% of items, reduces estimation error by up to 10%, and yields ability scores better aligned with human preferences compared to prior approaches.

Technology Category

Application Category

📝 Abstract
Evaluating large language models (LLMs) on comprehensive benchmarks is a cornerstone of their development, yet it's often computationally and financially prohibitive. While Item Response Theory (IRT) offers a promising path toward data-efficient evaluation by disentangling model capability from item difficulty, existing IRT-based methods are hampered by significant limitations. They are typically restricted to binary correctness metrics, failing to natively handle the continuous scores used in generative tasks, and they operate on single benchmarks, ignoring valuable structural knowledge like correlations across different metrics or benchmarks. To overcome these challenges, we introduce LEGO-IRT, a unified and flexible framework for data-efficient LLM evaluation. LEGO-IRT's novel design natively supports both binary and continuous evaluation metrics. Moreover, it introduces a factorized architecture to explicitly model and leverage structural knowledge, decomposing model ability estimates into a general component and structure-specific (e.g., per-metric or per-benchmark) components. Through extensive experiments involving $70$ LLMs across $5$ benchmarks, we show that LEGO-IRT achieves stable capability estimates using just $3%$ of the total evaluation items. We demonstrate that incorporating structural knowledge reduces estimation error by up to $10%$ and reveal that the latent abilities estimated by our framework may align more closely with human preferences.
Problem

Research questions and friction points this paper is trying to address.

Addressing computational costs in large language model evaluation
Overcoming limitations of binary metrics in generative tasks
Integrating structural knowledge across multiple benchmarks efficiently
Innovation

Methods, ideas, or system contributions that make the work stand out.

LEGO-IRT supports binary and continuous metrics
Factorized architecture models structural knowledge across benchmarks
Achieves stable capability estimates with minimal evaluation items
🔎 Similar Papers
No similar papers found.
L
Lele Liao
Fudan University
Q
Qile Zhang
Shanghai Jiao Tong University
R
Ruofan Wu
Fudan University
Guanhua Fang
Guanhua Fang
Assistant professor, Fudan University
StatisticsMachine learning