🤖 AI Summary
This work addresses the insufficient coverage of boundary and subtle error cases in existing code generation benchmarks, which often misclassify incorrect solutions as correct. To mitigate this, the authors propose an agent-based automated adversarial testing framework that emulates the “Hack” mechanism from competitive programming. The framework employs multiple strategies—including stress testing, anti-hash attacks, and logic-directed attacks—to generate targeted adversarial test cases. It further incorporates a self-calibration phase, where self-generated probes iteratively refine the validator and checker components. Experimental results demonstrate that the approach significantly improves the true negative rate (TNR), effectively identifying previously misjudged erroneous programs. Moreover, the generated test cases, when used as training data, enhance the performance of reinforcement learning models on benchmarks such as LiveCodeBench.
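The Validator and Checker mentioned above play the standard competitive-judging roles: the Validator confirms that a generated input respects the problem's constraints, and the Checker decides whether an output is acceptable. The paper does not spell out its calibration procedure, but a minimal sketch of the idea — probing a validator with boundary inputs of known verdicts and flagging mismatches for refinement — might look like this (the constraints, probe values, and function names are all illustrative):

```python
def validator(values):
    """Hypothetical problem constraints: 1 <= len(values) <= 100,
    each value in [-1000, 1000]."""
    return 1 <= len(values) <= 100 and all(-1000 <= v <= 1000 for v in values)

def checker(expected, actual):
    """Simplest possible checker: accept only exact matches with the reference output."""
    return expected == actual

def calibrate(validator, probes):
    """Self-calibration step: feed boundary probes with known verdicts.
    Any mismatch signals the validator must be refined before real attacks run."""
    failures = []
    for values, should_accept in probes:
        if validator(values) != should_accept:
            failures.append(values)
    return failures

# Boundary probes: empty input and out-of-range values must be rejected;
# inputs exactly at the constraint limits must be accepted.
probes = [
    ([], False),          # below minimum length
    ([1001], False),      # value above upper bound
    ([-1000], True),      # value exactly at lower bound
    ([0] * 100, True),    # length exactly at upper bound
    ([0] * 101, False),   # length above upper bound
]
failures = calibrate(validator, probes)  # empty list => validator passes calibration
```

In the paper's setting the probes are generated by the agent itself rather than hand-written, and a failing probe triggers another refinement round instead of a manual fix.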
📝 Abstract
The evaluation of Large Language Models (LLMs) for code generation relies heavily on the quality and robustness of test cases. However, existing benchmarks often lack coverage of subtle corner cases, allowing incorrect solutions to pass. To bridge this gap, we propose CodeHacker, an automated agent framework dedicated to generating targeted adversarial test cases that expose latent vulnerabilities in program submissions. Mimicking the hack mechanism of competitive programming, CodeHacker employs a multi-strategy approach, including stress testing, anti-hash attacks, and logic-directed attacks tailored to individual submissions. To ensure the validity and reliability of these attacks, we introduce a Calibration Phase, in which the agent iteratively refines its own Validator and Checker via self-generated adversarial probes before evaluating contestant code. Experiments demonstrate that CodeHacker significantly improves the True Negative Rate (TNR) of existing datasets, effectively filtering out incorrect solutions that were previously accepted. Furthermore, the generated adversarial cases prove to be superior training data, boosting the performance of RL-trained models on benchmarks such as LiveCodeBench.
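Of the attack strategies listed above, stress testing is the most classical: pit a slow but trusted brute-force oracle against the candidate submission on many small random inputs until they disagree, and keep the disagreeing input as an adversarial test case. A minimal self-contained sketch of that recipe (the max-subarray problem, the planted bug, and all function names are illustrative, not from the paper):

```python
import random

def brute_force_max_subarray(a):
    """Trusted but slow reference: maximum subarray sum in O(n^2)."""
    best = a[0]
    for i in range(len(a)):
        total = 0
        for j in range(i, len(a)):
            total += a[j]
            best = max(best, total)
    return best

def candidate_max_subarray(a):
    """Buggy 'submission': Kadane's algorithm that mishandles all-negative arrays."""
    best = 0  # bug: should be initialized to a[0]
    cur = 0
    for x in a:
        cur = max(0, cur + x)
        best = max(best, cur)
    return best

def stress_test(trials=1000, seed=0):
    """Generate small random inputs until the candidate disagrees with the oracle;
    the first disagreement is returned as an adversarial test case."""
    rng = random.Random(seed)
    for _ in range(trials):
        a = [rng.randint(-5, 5) for _ in range(rng.randint(1, 6))]
        if candidate_max_subarray(a) != brute_force_max_subarray(a):
            return a
    return None

counterexample = stress_test()  # e.g. a short all-negative array
```

Small value ranges and short arrays are deliberate here: they make the failing case frequent and, once found, easy to minimize into a readable test. CodeHacker combines this generic strategy with attacks targeted at the specific logic of each submission.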