CodeHacker: Automated Test Case Generation for Detecting Vulnerabilities in Competitive Programming Solutions

📅 2026-02-23
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the insufficient coverage of boundary and subtle error cases in existing code generation benchmarks, which often misclassify incorrect solutions as correct. To mitigate this, the authors propose an agent-based automated adversarial testing framework that emulates the “Hack” mechanism from competitive programming. The framework employs multiple strategies—including stress testing, anti-hash attacks, and logic-directed attacks—to generate targeted adversarial test cases. It further incorporates a self-calibration phase, where self-generated probes iteratively refine the validator and checker components. Experimental results demonstrate that the approach significantly improves the true negative rate (TNR), effectively identifying previously misjudged erroneous programs. Moreover, the generated test cases, when used as training data, enhance the performance of reinforcement learning models on benchmarks such as LiveCodeBench.
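Of the strategies listed, stress testing is the most classical: run a trusted brute-force oracle alongside the candidate solution on many small random inputs and keep the first input where their outputs diverge. The paper's actual implementation is agent-driven; the sketch below is only a minimal illustration of the underlying idea, with a toy task (summing a list) and a deliberately buggy `candidate` function, both hypothetical.

```python
import random

def gen_input(rng):
    # Hypothetical generator for a toy task: sum a short list of ints.
    n = rng.randint(1, 5)
    return [rng.randint(-10, 10) for _ in range(n)]

def reference(xs):
    # Slow but obviously correct oracle.
    return sum(xs)

def candidate(xs):
    # Buggy "submission": silently drops negative numbers,
    # so it passes on all-nonnegative inputs.
    return sum(x for x in xs if x >= 0)

def stress(trials=1000, seed=0):
    """Return the first input on which candidate and oracle disagree,
    or None if no counterexample ("hack") is found."""
    rng = random.Random(seed)
    for _ in range(trials):
        xs = gen_input(rng)
        if candidate(xs) != reference(xs):
            return xs
    return None

hack = stress()
```

A found counterexample is exactly the kind of targeted adversarial test case the framework adds to the benchmark: it is valid input, yet it flips a previously "accepted" solution to rejected.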

📝 Abstract
The evaluation of Large Language Models (LLMs) for code generation relies heavily on the quality and robustness of test cases. However, existing benchmarks often lack coverage for subtle corner cases, allowing incorrect solutions to pass. To bridge this gap, we propose CodeHacker, an automated agent framework dedicated to generating targeted adversarial test cases that expose latent vulnerabilities in program submissions. Mimicking the hack mechanism in competitive programming, CodeHacker employs a multi-strategy approach, including stress testing, anti-hash attacks, and logic-directed attacks that target weaknesses in individual code submissions. To ensure the validity and reliability of these attacks, we introduce a Calibration Phase, where the agent iteratively refines its own Validator and Checker via self-generated adversarial probes before evaluating contestant code. Experiments demonstrate that CodeHacker significantly improves the True Negative Rate (TNR) of existing datasets, effectively filtering out incorrect solutions that were previously accepted. Furthermore, the generated adversarial cases prove to be superior training data, boosting the performance of RL-trained models on benchmarks like LiveCodeBench.
Problem

Research questions and friction points this paper is trying to address.

test case generation
vulnerability detection
competitive programming
adversarial testing
code evaluation
Innovation

Methods, ideas, or system contributions that make the work stand out.

adversarial test case generation
automated code evaluation
competitive programming hack
calibration phase
large language models
Jingwei Shi
Shanghai University of Finance and Economics
Deep Learning · LLM · MLLM · Agent
Xinxiang Yin
Northwestern Polytechnical University
Jing Huang
Meituan
Jinman Zhao
University of Toronto
Shengyu Tao
Shanghai University of Finance and Economics