🤖 AI Summary
To address the high compilation failure rate and low test coverage of unit tests generated by large language models (LLMs), this paper proposes a two-stage decoupled generation framework. In Stage I, traditional tools (e.g., EvoSuite) generate compilation-robust test prefixes. In Stage II, LLMs leverage compilable method-call seeds and branch-aware prompting signals to produce assertions covering diverse execution paths. By integrating compilation feedback with path-guided generation, the method achieves an approximately 7% improvement in compilation success rate on five real-world Java projects, successfully fixing 792–887 previously failing test cases. Branch and line coverage reach up to ~73%, representing a 1.09×–1.26× improvement over baseline approaches. The core contributions are the synergistic two-stage coordination mechanism and the branch-aware prompt design, which jointly enhance both the syntactic validity and the semantic coverage of LLM-generated unit tests.
📝 Abstract
Unit tests play a vital role in the software development lifecycle. Recent advances in Large Language Model (LLM)-based approaches have significantly improved automated test generation, garnering attention from both academia and industry. We revisit LLM-based unit test generation from a novel perspective by decoupling prefix generation and assertion generation. To characterize their respective challenges, we define Initialization Complexity and adopt Cyclomatic Complexity to measure the difficulty of prefix and assertion generation, revealing that the former primarily affects compilation success, while the latter influences test coverage. To address these challenges, we propose Seed&Steer, a two-step approach that combines traditional unit testing techniques with the capabilities of large language models. Seed&Steer leverages conventional unit testing tools (e.g., EvoSuite) to generate method invocations with high compilation success rates, which serve as seeds to guide LLMs in constructing effective test contexts. It then introduces branching cues to help LLMs explore diverse execution paths (e.g., normal, boundary, and exception cases) and generate assertions with high coverage. We evaluate Seed&Steer on five real-world Java projects against state-of-the-art baselines. Results show that Seed&Steer improves the compilation pass rate by approximately 7%, successfully compiling 792 and 887 previously failing cases on two LLMs. It also achieves up to ~73% branch and line coverage across focal methods of varying complexity, with coverage improvements ranging from 1.09× to 1.26×. Our code, dataset, and experimental scripts will be publicly released to support future research and reproducibility.
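The seed-then-steer flow described in the abstract can be sketched as a simple prompt-assembly step: a compilable prefix from Stage I (e.g., produced by EvoSuite) is combined with branch cues that steer Stage II assertion generation toward normal, boundary, and exception paths. The function name, the cue format, and the toy `Calculator` prefix below are illustrative assumptions, not the paper's actual implementation.

```python
# Hypothetical sketch of the Seed&Steer hand-off between stages.
# Stage I supplies a compilable method-invocation prefix (the "seed");
# Stage II wraps it with branch cues before querying an LLM ("steer").

def build_branch_aware_prompt(seed_prefix: str, branch_cues: list[str]) -> str:
    """Combine a compilable test prefix with branch cues into one LLM prompt.

    seed_prefix: method-call sequence assumed to come from a tool
        such as EvoSuite (Stage I), known to compile.
    branch_cues: path hints (normal / boundary / exception) guiding
        assertion generation toward diverse execution paths (Stage II).
    """
    cue_block = "\n".join(f"- cover the {cue} path" for cue in branch_cues)
    return (
        "Given the following compilable test prefix:\n"
        f"{seed_prefix}\n"
        "Generate assertions that:\n"
        f"{cue_block}\n"
    )

# Toy usage with a hypothetical focal method Calculator.divide.
prompt = build_branch_aware_prompt(
    seed_prefix="Calculator c = new Calculator();\nint r = c.divide(10, 2);",
    branch_cues=["normal", "boundary (divisor = 1)", "exception (divisor = 0)"],
)
print(prompt)
```

The point of the sketch is the division of labor: compilation validity is inherited from the tool-generated seed, while path diversity is injected purely through the prompt, matching the paper's claim that the two concerns can be addressed separately.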