Seed&Steer: Guiding Large Language Models with Compilable Prefix and Branch Signals for Unit Test Generation

πŸ“… 2025-07-23
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ€– AI Summary
To address the high compilation failure rate and low test coverage of unit tests generated by large language models (LLMs), this paper proposes a two-stage decoupled generation framework. In Stage I, traditional tools (e.g., EvoSuite) generate compilation-robust test prefixes. In Stage II, LLMs leverage these compilable method-call seeds and branch-aware prompting signals to produce assertions covering multiple execution paths. By integrating compilation feedback with path-guided generation, the method improves compilation success rate by roughly 7% on five real-world Java projects, successfully fixing 792 and 887 previously failing test cases on two LLMs. Branch and line coverage reach up to ~73%, a 1.09× to 1.26× improvement over baseline approaches. The core contributions are the synergistic two-stage coordination mechanism and the branch-aware prompt design, which jointly enhance both syntactic validity and semantic coverage of LLM-generated unit tests.
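The two-stage split described above can be sketched as follows. The `Calculator` focal class, the class names, and the specific assertions are hypothetical illustrations of the idea, not artifacts from the paper:

```java
// Hedged sketch of the Seed&Steer two-stage split (all names hypothetical).

// A tiny focal method with a normal path and an exception path.
class Calculator {
    static int divide(int a, int b) {
        if (b == 0) throw new IllegalArgumentException("division by zero");
        return a / b;
    }
}

public class SeedSteerSketch {
    public static void main(String[] args) {
        // Stage I (seed): a compilation-robust test prefix -- setup plus the
        // focal-method invocation, as a tool like EvoSuite might generate it.
        int result = Calculator.divide(10, 2);

        // Stage II (steer): branch-aware assertions the LLM is prompted to
        // add, one per execution path.
        if (result != 5) throw new AssertionError("normal path");  // normal case
        boolean threw = false;
        try {
            Calculator.divide(1, 0);                               // exception case
        } catch (IllegalArgumentException e) {
            threw = true;
        }
        if (!threw) throw new AssertionError("exception path");
        System.out.println("both branches covered");
    }
}
```

The point of the split is that the prefix (Stage I) is the part most likely to fail compilation, so it is delegated to a tool with high compilation reliability, while the LLM only fills in the assertion logic.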

πŸ“ Abstract
Unit tests play a vital role in the software development lifecycle. Recent advances in Large Language Model (LLM)-based approaches have significantly improved automated test generation, garnering attention from both academia and industry. We revisit LLM-based unit test generation from a novel perspective by decoupling prefix generation and assertion generation. To characterize their respective challenges, we define Initialization Complexity and adopt Cyclomatic Complexity to measure the difficulty of prefix and assertion generation, revealing that the former primarily affects compilation success, while the latter influences test coverage. To address these challenges, we propose Seed&Steer, a two-step approach that combines traditional unit testing techniques with the capabilities of large language models. Seed&Steer leverages conventional unit testing tools (e.g., EvoSuite) to generate method invocations with high compilation success rates, which serve as seeds to guide LLMs in constructing effective test contexts. It then introduces branching cues to help LLMs explore diverse execution paths (e.g., normal, boundary, and exception cases) and generate assertions with high coverage. We evaluate Seed&Steer on five real-world Java projects against state-of-the-art baselines. Results show that Seed&Steer improves the compilation pass rate by approximately 7%, successfully compiling 792 and 887 previously failing cases on two LLMs. It also achieves up to ~73% branch and line coverage across focal methods of varying complexity, with coverage improvements ranging from 1.09× to 1.26×. Our code, dataset, and experimental scripts will be publicly released to support future research and reproducibility.
Problem

Research questions and friction points this paper is trying to address.

Improving compilation success in LLM-based unit test generation
Enhancing test coverage through diverse execution path exploration
Combining traditional testing tools with LLMs for effective test contexts
Innovation

Methods, ideas, or system contributions that make the work stand out.

Decouples prefix and assertion generation for tests
Uses traditional tools to generate compilable seeds
Introduces branching cues for diverse path coverage
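The "branching cues" in the bullets above could be realized as a prompt that enumerates the focal method's execution paths alongside the compilable seed. The prompt wording and helper below are illustrative assumptions, not the paper's actual template:

```java
import java.util.List;

// Illustrative only: assembles a branch-aware prompt from a compilable
// prefix (the seed) and a list of branch conditions (the cues).
public class BranchPromptSketch {
    static String buildPrompt(String prefix, List<String> branchCues) {
        StringBuilder sb = new StringBuilder();
        sb.append("Complete this unit test with assertions.\n");
        sb.append("Keep the compilable prefix unchanged:\n");
        sb.append(prefix).append("\n");
        sb.append("Write one test per path:\n");
        for (String cue : branchCues) {
            sb.append("- ").append(cue).append("\n");
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        String prompt = buildPrompt(
            "Calculator c = new Calculator();",
            List.of("normal case (divisor != 0)",
                    "exception case (divisor == 0)"));
        System.out.println(prompt);
    }
}
```

Enumerating paths explicitly, rather than asking for "good coverage" in general, is what steers the model toward normal, boundary, and exception assertions.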
Shuaiyu Zhou
Peking University
Zhengran Zeng
Peking University
Xiaoling Zhou
Peking University
Rui Xie
Peking University
Shikun Zhang
Peking University
Wei Ye
Peking University