LLMEval-3: A Large-Scale Longitudinal Study on Robust and Fair Evaluation of Large Language Models

📅 2025-08-07
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Existing LLM evaluations rely heavily on static benchmarks, rendering them vulnerable to data contamination and leaderboard overfitting and thus failing to reflect models' true capabilities. Method: We propose the first dynamic evaluation framework designed for long-term evolution: (1) a 220K-item graduate-level question bank enabling dynamic sampling of an unseen test set for each run; (2) an anti-cheating architecture, contamination-resistant data curation, and a calibrated LLM-as-a-judge system achieving 90% agreement with human experts; and (3) relative ranking to mitigate absolute scoring bias. Contribution/Results: Over 20 months, we evaluated nearly 50 mainstream models, uncovering, for the first time in a longitudinal assessment, a performance ceiling on knowledge memorization and contamination blind spots in LLMs. Our framework significantly improves ranking stability and evaluation reliability, detecting performance ceilings and contamination that static benchmarks cannot.

📝 Abstract
Existing evaluation of Large Language Models (LLMs) on static benchmarks is vulnerable to data contamination and leaderboard overfitting, critical issues that obscure true model capabilities. To address this, we introduce LLMEval-3, a framework for dynamic evaluation of LLMs. LLMEval-3 is built on a proprietary bank of 220k graduate-level questions, from which it dynamically samples unseen test sets for each evaluation run. Its automated pipeline ensures integrity via contamination-resistant data curation, a novel anti-cheating architecture, and a calibrated LLM-as-a-judge process achieving 90% agreement with human experts, complemented by a relative ranking system for fair comparison. A 20-month longitudinal study of nearly 50 leading models reveals a performance ceiling on knowledge memorization and exposes data contamination vulnerabilities undetectable by static benchmarks. The framework demonstrates exceptional robustness in ranking stability and consistency, providing strong empirical validation for the dynamic evaluation paradigm. LLMEval-3 offers a robust and credible methodology for assessing the true capabilities of LLMs beyond leaderboard scores, promoting the development of more trustworthy evaluation standards.
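The relative ranking system mentioned in the abstract can be sketched as follows. This is a hypothetical illustration, not the paper's actual protocol: the `judge` callable and the toy `quality` scores are assumptions standing in for the calibrated LLM-as-a-judge. The idea is to order models by pairwise win count rather than absolute scores, which sidesteps calibration bias in absolute scoring.

```python
from itertools import combinations

def relative_ranking(models, judge):
    """Rank models by pairwise wins instead of absolute scores.

    `judge(a, b)` returns whichever of the two models it prefers;
    in LLMEval-3 this role is played by a calibrated LLM-as-a-judge.
    """
    wins = {m: 0 for m in models}
    for a, b in combinations(models, 2):
        wins[judge(a, b)] += 1
    return sorted(models, key=lambda m: wins[m], reverse=True)

# Toy judge: prefers the model with the higher hidden quality score.
quality = {"model_a": 0.9, "model_b": 0.6, "model_c": 0.3}
ranking = relative_ranking(
    list(quality), lambda a, b: a if quality[a] >= quality[b] else b
)
# ranking is ["model_a", "model_b", "model_c"]
```

Because only pairwise preferences enter the ranking, a judge that is systematically harsh or lenient in its absolute scores still produces a stable ordering.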
Problem

Research questions and friction points this paper is trying to address.

Addresses data contamination in static LLM evaluations
Introduces dynamic sampling for fair model comparison
Ensures evaluation integrity via anti-cheating measures
Innovation

Methods, ideas, or system contributions that make the work stand out.

Dynamic sampling of unseen test sets
Contamination-resistant data curation
Calibrated LLM-as-a-judge process
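The dynamic-sampling idea in the first bullet can be sketched as below. This is a minimal hypothetical illustration, not the authors' implementation: the bank structure, `id` field, and `dynamic_sample` helper are all assumptions. The point is that each evaluation run draws a fresh, previously unused subset of the question bank, so no fixed test set exists to leak into training data.

```python
import random

def dynamic_sample(question_bank, used_ids, k, seed=None):
    """Draw k previously unseen questions from the bank for one evaluation run."""
    rng = random.Random(seed)
    fresh = [q for q in question_bank if q["id"] not in used_ids]
    if len(fresh) < k:
        raise ValueError("question bank exhausted; replenish before sampling")
    picked = rng.sample(fresh, k)
    used_ids.update(q["id"] for q in picked)  # mark as seen for later runs
    return picked

# Toy bank standing in for the 220K-item graduate-level question bank.
bank = [{"id": i, "question": f"Q{i}"} for i in range(100)]
seen = set()
run1 = dynamic_sample(bank, seen, 10, seed=0)
run2 = dynamic_sample(bank, seen, 10, seed=1)
assert not {q["id"] for q in run1} & {q["id"] for q in run2}  # runs never overlap
```

At 220K items, a bank of this size supports many non-overlapping runs before exhaustion, which is what makes a 20-month longitudinal study feasible without test-set reuse.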
Ming Zhang (Fudan University, Shanghai, China)
Yujiong Shen (Fudan University, Shanghai, China)
Jingyi Deng (Fudan University, Shanghai, China)
Yuhui Wang (Fudan University, Shanghai, China)
Yue Zhang (Fudan University, Shanghai, China)
Junzhe Wang (Fudan University, Shanghai, China)
Shichun Liu (Fudan University)
Shihan Dou (Fudan University)
Huayu Sha (Fudan University, Shanghai, China)
Qiyuan Peng (Fudan University, Shanghai, China)
Changhao Jiang (Fudan University, Shanghai, China)
Jingqi Tong (Fudan University, Shanghai, China)
Yilong Wu (Fudan University)
Zhihao Zhang (Fudan University, Shanghai, China)
Mingqi Wu (Director of Data Science, Microsoft)
Zhiheng Xi (Fudan University)
Mingxu Chai (Fudan University)
Tao Liang (ByteDance, Beijing, China)
Zhihui Fei (ByteDance, Beijing, China)
Zhen Wang (ByteDance, Beijing, China)
Mingyang Wan (ByteDance, Beijing, China)
Guojun Ma (ByteDance, Beijing, China)
Tao Gui (Fudan University, Shanghai, China)
Qi Zhang (Fudan University, Shanghai, China)
Xuanjing Huang (Fudan University, Shanghai, China)