🤖 AI Summary
This work addresses benchmark contamination in LLM evaluation, which conflates rote memorization with genuine capability. We propose TrinEval, a novel trinary evaluation framework that treats contamination as an inherent aspect of the learning process and decouples memory from reasoning via question-type reconstruction. Methodologically, TrinEval integrates performance attribution analysis, MCQ-type re-modeling, and controlled memory-condition experiments. Empirical evaluation on MMLU reveals that ~20.5% of mainstream LLM responses stem from mechanical memorization, and that accuracy on memorized questions is on average 3.7 points lower than on non-memorized ones. TrinEval quantitatively isolates memory effects from authentic reasoning ability, enabling contamination-robust assessment. It establishes a new paradigm for LLM evaluation, shifting the focus from mere answer correctness ("how many are right?") to a causal understanding of correct responses ("why is it right?").
📝 Abstract
Multiple-choice question (MCQ) benchmarks are widely used for evaluating Large Language Models (LLMs), yet their reliability is undermined by benchmark contamination. In this study, we reframe contamination as an inherent aspect of learning and seek to disentangle genuine capability acquisition from superficial memorization in LLM evaluation. First, by analyzing model performance under different memorization conditions, we uncover a counterintuitive trend: LLMs perform worse on memorized MCQs than on non-memorized ones, indicating the coexistence of two distinct learning phenomena, i.e., rote memorization and genuine capability learning. To disentangle them, we propose TrinEval, a novel evaluation framework that reformulates MCQs into an alternative trinity format, reducing memorization while preserving knowledge assessment. Experiments validate TrinEval's effectiveness in reformulation, and its evaluation reveals that common LLMs may memorize roughly 20.5% of knowledge points by rote (on average, on MMLU).
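The memory-condition comparison behind the headline numbers can be sketched as follows. This is a toy illustration only: the `memorized` flags, record structure, and data are hypothetical stand-ins for the paper's contamination probe and MMLU subsets, which are not reproduced here.

```python
# Toy sketch: compare accuracy on memorized vs. non-memorized questions.
# All data and field names below are illustrative assumptions, not the
# paper's actual probe or benchmark splits.

def accuracy(records):
    """Fraction of records answered correctly."""
    return sum(r["correct"] for r in records) / len(records)

def memory_condition_gap(records):
    """Split responses by a (hypothetical) memorization flag and return
    (acc_memorized, acc_non_memorized, gap in percentage points)."""
    memorized = [r for r in records if r["memorized"]]
    fresh = [r for r in records if not r["memorized"]]
    acc_mem = accuracy(memorized)
    acc_fresh = accuracy(fresh)
    return acc_mem, acc_fresh, (acc_fresh - acc_mem) * 100

# Toy records: each response carries a correctness label (1/0) and a
# memorization flag, as a contamination probe might produce.
toy = (
    [{"memorized": True, "correct": c} for c in [1, 0, 1, 0, 1, 0, 1, 0, 1, 0]]
    + [{"memorized": False, "correct": c} for c in [1, 1, 1, 0, 1, 1, 0, 1, 1, 0]]
)

acc_mem, acc_fresh, gap = memory_condition_gap(toy)
print(f"memorized acc={acc_mem:.0%}, non-memorized acc={acc_fresh:.0%}, "
      f"gap={gap:.1f} pts")
```

On the toy data the memorized subset scores lower than the non-memorized one, mirroring the direction (though not the magnitude) of the paper's reported gap.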