🤖 AI Summary
In large language model (LLM)-based code generation, insufficient unit test coverage leads to low-quality reward signals and unreliable validation. Method: This paper proposes scaling the number of unit tests to improve reward signal quality. We empirically establish a positive correlation between the number of unit tests and reward signal quality, and accordingly design CodeRM-8B, a lightweight, code-specific unit test generator that enables efficient, high-quality test scaling. We further introduce a dynamic scaling mechanism that adapts the number of unit tests to problem difficulty. Results: Evaluated on three major benchmarks, including HumanEval Plus, our approach significantly improves accuracy: +18.43% for Llama3-8B and +3.42% for GPT-4o-mini. These results indicate that adaptive scaling of unit tests is critical for improving the fidelity and reliability of reward signals in code generation.
📝 Abstract
Current large language models (LLMs) often struggle to produce accurate responses on the first attempt for complex reasoning tasks like code generation. Prior research tackles this challenge by generating multiple candidate solutions and validating them with LLM-generated unit tests. The execution results of these unit tests serve as reward signals to identify correct solutions. However, since LLMs often make mistakes with high confidence, the generated unit tests are not fully reliable, which diminishes the quality of the reward signals. Motivated by the observation that scaling the number of solutions improves LLM performance, we explore whether scaling the number of unit tests can likewise enhance reward signal quality. Our preliminary experiment reveals a positive correlation between the number of unit tests and reward signal quality, with greater benefits observed on more challenging problems. Based on these insights, we propose CodeRM-8B, a lightweight yet effective unit test generator that enables efficient and high-quality unit test scaling. Additionally, we implement a dynamic scaling mechanism that adapts the number of unit tests to problem difficulty, further improving efficiency. Experimental results show that our approach significantly improves performance across various models on three benchmarks (e.g., gains of 18.43% for Llama3-8B and 3.42% for GPT-4o-mini on HumanEval Plus).
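The core selection scheme the abstract describes, using unit-test execution results as reward signals to pick the best of several candidate solutions, can be sketched as follows. This is a minimal illustration, not the paper's implementation; the function names (`run_test`, `select_best`) and the in-process `exec`-based test runner are assumptions for demonstration (a real system would sandbox execution).

```python
# Sketch of best-of-N candidate selection via unit-test reward signals.
# Names here are illustrative, not the paper's API.

def run_test(solution_code: str, test_code: str) -> bool:
    """Run one unit test against a candidate solution; True if it passes."""
    namespace: dict = {}
    try:
        exec(solution_code, namespace)  # define the candidate function
        exec(test_code, namespace)      # execute the asserting test
        return True
    except Exception:
        return False

def select_best(candidates: list[str], unit_tests: list[str]) -> str:
    """Reward = number of tests passed; return the highest-reward candidate."""
    rewards = [sum(run_test(c, t) for t in unit_tests) for c in candidates]
    best_idx = max(range(len(candidates)), key=rewards.__getitem__)
    return candidates[best_idx]

# Toy example: two candidate implementations of `add`, three generated tests.
candidates = [
    "def add(a, b):\n    return a - b",  # buggy candidate
    "def add(a, b):\n    return a + b",  # correct candidate
]
tests = [
    "assert add(1, 2) == 3",
    "assert add(0, 0) == 0",
    "assert add(-1, 1) == 0",
]
best = select_best(candidates, tests)
```

Under this scheme, unreliable tests directly corrupt the reward: a wrong assertion can penalize a correct candidate, which is why scaling the number (and quality) of tests matters.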