🤖 AI Summary
This study systematically investigates how data contamination affects the performance evaluation of pre-trained language models (PLMs; e.g., RoBERTa, GPT-2) and large language models (LLMs; e.g., LLaMA, StarCoder) on code intelligence tasks, namely code translation, code generation, and code summarization. The authors design controlled experiments across four contamination scenarios: input-only, output-only, unpaired, and paired contamination, covering both the pretraining–finetuning–inference and the direct-inference paradigms. Key findings: paired contamination induces significant performance inflation only for LLMs under direct inference or for PLMs under minimal finetuning; unpaired and single-sided contamination have negligible effects; and PLMs remain robust under standard finetuning, whereas LLMs, which rely heavily on contextual pairing, are more vulnerable. Crucially, this work provides empirical evidence that contamination does not necessarily lead to overestimation, a counterintuitive insight that challenges conventional evaluation assumptions and yields practical guidelines for trustworthy evaluation and deployment of code models.
📝 Abstract
In recent years, code intelligence has gained increasing importance in the field of automated software engineering. Meanwhile, the widespread adoption of Pretrained Language Models (PLMs) and Large Language Models (LLMs) has raised concerns regarding data contamination and its potential impact on model performance evaluation. This paper presents a systematic empirical study of the effects of fine-grained data contamination on code intelligence tasks. Our study involves representative PLMs, namely RoBERTa and GPT-2, and LLMs, namely LLaMA and StarCoder, covering three major tasks: code translation, code generation, and code summarization. We categorize contamination scenarios into four types according to code intelligence practice, namely input-only, output-only, unpaired, and paired contamination, and construct corresponding experimental and control groups for each. Experimental results show that, under the pre-training, fine-tuning, and inference paradigm adopted by PLMs, even deliberately injecting paired contamination does not lead to significant performance overestimation; however, direct inference or small-scale fine-tuning does reveal the contamination effects. In contrast, LLMs, which follow the pre-training and inference paradigm, are significantly affected by paired contamination. The remaining contamination scenarios have no discernible impact on either PLMs or LLMs. Our findings challenge the conventional belief that contamination inevitably leads to performance overestimation, providing new insights into the evaluation and deployment of code intelligence models.
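The four contamination settings can be illustrated with a small sketch. The helper below is hypothetical (not from the paper) and simply shows how evaluation examples, each an (input, output) pair, might be injected into a training corpus under each setting, with the unpaired case breaking the input–output alignment by shuffling:

```python
import random

def contaminate(train_corpus, eval_pairs, setting, seed=0):
    """Return a copy of train_corpus with evaluation data injected
    according to one of the four contamination settings."""
    rng = random.Random(seed)
    corpus = list(train_corpus)
    inputs = [inp for inp, _ in eval_pairs]
    outputs = [out for _, out in eval_pairs]
    if setting == "input-only":
        # Only the task inputs (e.g., source code to translate) leak.
        corpus += inputs
    elif setting == "output-only":
        # Only the reference outputs (e.g., target translations) leak.
        corpus += outputs
    elif setting == "unpaired":
        # Both sides leak, but the pairing is destroyed by shuffling
        # the outputs, so no input appears next to its own output.
        shuffled = outputs[:]
        rng.shuffle(shuffled)
        corpus += inputs + shuffled
    elif setting == "paired":
        # Full (input, output) pairs leak as aligned training examples.
        corpus += [f"{inp}\n{out}" for inp, out in eval_pairs]
    else:
        raise ValueError(f"unknown setting: {setting}")
    return corpus
```

For example, `contaminate(train, evals, "paired")` appends each evaluation pair as a single aligned training example, the strongest leakage the study considers, while `"input-only"` and `"output-only"` each append just one side of the pair.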