Rethinking the effects of data contamination in Code Intelligence

📅 2025-06-03
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study systematically investigates how data contamination affects the performance evaluation of pre-trained language models (PLMs; e.g., RoBERTa, GPT-2) and large language models (LLMs; e.g., LLaMA, StarCoder) on code intelligence tasks—namely translation, generation, and summarization. We design controlled experiments across four contamination scenarios: input-only, output-only, unpaired, and paired contamination, covering both pretraining–finetuning–inference and direct inference paradigms. Key findings reveal that paired contamination induces significant performance inflation only in LLMs under direct inference or PLMs under minimal finetuning; unpaired and single-sided contamination exert negligible effects; and PLMs exhibit robustness under standard finetuning, whereas LLMs—relying heavily on contextual pairing—are more vulnerable. Crucially, this work provides the first empirical evidence that contamination does not necessarily lead to overestimation—a counterintuitive insight that challenges conventional evaluation assumptions. It establishes new benchmarks and practical guidelines for trustworthy evaluation and secure deployment of code models.

📝 Abstract
In recent years, code intelligence has gained increasing importance in the field of automated software engineering. Meanwhile, the widespread adoption of Pretrained Language Models (PLMs) and Large Language Models (LLMs) has raised concerns regarding data contamination and its potential impact on model performance evaluation. This paper presents a systematic empirical study investigating fine-grained data contamination in code intelligence tasks. Our study involves diverse representative PLMs, namely RoBERTa and GPT-2, and LLMs, namely LLaMA and StarCoder, covering three major tasks: code translation, code generation, and code summarization. Following code intelligence practice, we categorize contamination scenarios into four types, namely input-only, output-only, unpaired, and paired contamination settings, and construct corresponding experimental and control groups for exploration. Experimental results show that, under the pre-training, fine-tuning, and inference paradigm adopted by PLMs, even deliberately injected paired contamination does not lead to significant performance overestimation; however, direct inference or small-scale fine-tuning uncovers the contamination effects. In contrast, LLMs, which follow the pre-training and inference paradigm, are significantly affected by paired contamination. The remaining contamination scenarios have no significant impact on either PLMs or LLMs. Our findings challenge the conventional belief that contamination inevitably leads to performance overestimation, providing new insights into the evaluation and deployment of code intelligence models.
Problem

Research questions and friction points this paper is trying to address.

Investigates data contamination effects on code intelligence tasks
Evaluates PLMs and LLMs under varied contamination scenarios
Challenges belief that contamination always causes performance overestimation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Systematic study on fine-grained data contamination
Diverse PLMs and LLMs for code intelligence tasks
Four contamination scenarios with experimental validation
Zhen Yang
School of Computer Science and Technology, Shandong University, Qingdao, China
Hongyi Lin
School of Computer Science and Technology, Shandong University, Qingdao, China
Yifan He
School of Computer Science and Technology, Shandong University, Qingdao, China
Jie Xu
School of Computer Science and Technology, Shandong University, Qingdao, China
Zeyu Sun
Institute of Software, Chinese Academy of Sciences, Beijing, China
Shuo Liu
Department of Computer Science, City University of Hong Kong, Hong Kong, China
Pengpeng Wang
Department of Computer Science, Columbia University, New York, USA
Zhongxing Yu
Shandong University
Programming Language · Formal Methods · Software Engineering
Qing-Lin Liang
School of Computer Science, Peking University, Beijing, China