How well do LLM-based test generation techniques perform with newer LLM versions?

📅 2026-01-14
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study addresses the unclear efficacy of existing large language model (LLM)-based test generation methods when evaluated on contemporary LLMs, as prior work often relies on outdated models and weak prompting strategies. The authors systematically reproduce four state-of-the-art tools—HITS, SymPrompt, TestSpark, and CoverUp—and integrate them with up-to-date LLMs, evaluating their performance against a simple prompting baseline across 393 classes and 3,657 methods. Results demonstrate that basic prompting alone surpasses current state-of-the-art approaches by substantial margins: +17.72% in line coverage, +19.80% in branch coverage, and +20.92% in mutation score, while using a comparable number of LLM queries. Furthermore, the paper introduces a hierarchical test generation strategy that maintains this high performance while reducing LLM requests by approximately 20%.
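
The "simple prompting baseline" referenced above amounts to one well-formed request per class, with no compilation or execution feedback loop. The paper's exact prompt is not reproduced here, so the sketch below is purely illustrative: the model name, prompt wording, and the generate_tests helper are assumptions, not the authors' implementation.

```python
# Illustrative sketch of a plain-prompting baseline: one LLM query per class,
# no repair loop. Model name, prompt wording, and this helper are assumptions
# for illustration only, not the paper's actual code.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def generate_tests(class_name: str, class_source: str) -> str:
    """Request a complete JUnit test class in a single query."""
    prompt = (
        f"Write a complete JUnit 5 test class for the Java class "
        f"{class_name} below. Aim for high line and branch coverage, "
        f"and make sure the tests compile.\n\n{class_source}"
    )
    response = client.chat.completions.create(
        model="gpt-4o",  # stand-in for "an up-to-date LLM"
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content
```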

📝 Abstract
The rapid evolution of Large Language Models (LLMs) has strongly impacted software engineering, leading to a growing number of studies on automated unit test generation. However, the standalone use of LLMs without post-processing has proven insufficient, often producing tests that fail to compile or achieve high coverage. Several techniques have been proposed to address these issues, reporting improvements in test compilation and coverage. While important, LLM-based test generation techniques have been evaluated against relatively weak baselines (by today's standards), i.e., old LLM versions and relatively weak prompts, which may inflate the apparent contribution of these approaches. In other words, stronger (newer) LLMs may obviate any advantage these techniques bring. We investigate this issue by replicating four state-of-the-art LLM-based test generation tools (HITS, SymPrompt, TestSpark, and CoverUp), all of which include engineering components aimed at guiding the test generation process through compilation and execution feedback, and by evaluating their relative effectiveness and efficiency against a plain LLM test generation method. We integrate current LLM versions into all approaches and run an experiment on 393 classes and 3,657 methods. Our results show that the plain LLM approach can outperform previous state-of-the-art approaches on all test effectiveness metrics we used: line coverage (by 17.72%), branch coverage (by 19.80%), and mutation score (by 20.92%), and it does so at a comparable cost in LLM queries. We also observe that the granularity at which the plain LLM is applied has a significant impact on cost. We therefore propose targeting the program classes first, where test generation is more efficient, and then the still-uncovered methods, to reduce the number of LLM requests. This strategy achieves comparable (slightly higher) effectiveness while requiring about 20% fewer LLM requests.
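
The hierarchical strategy in the abstract's last two sentences can be read as a two-pass loop: spend one query per class first, then spend per-method queries only on what the class-level suite left uncovered. The sketch below shows this shape under stated assumptions; the JavaClass container and the helper callables (class-level generation, method-level generation, coverage measurement) are hypothetical placeholders, not the paper's tooling.

```python
# Minimal sketch of a class-first, then uncovered-methods generation loop.
# JavaClass and all three callables are hypothetical placeholders; coverage
# measurement (e.g., via JaCoCo) and LLM calls happen inside them.
from dataclasses import dataclass
from typing import Callable

@dataclass
class JavaClass:
    name: str
    source: str
    methods: list[str]

def hierarchical_generation(
    classes: list[JavaClass],
    gen_class_tests: Callable[[JavaClass], str],
    gen_method_tests: Callable[[JavaClass, str], str],
    uncovered_methods: Callable[[JavaClass, str], set[str]],
) -> tuple[list[str], int]:
    """Pass 1: one query per class. Pass 2: one query per still-uncovered method."""
    suites: list[str] = []
    requests = 0
    for cls in classes:
        suite = gen_class_tests(cls)  # 1 LLM request for the whole class
        requests += 1
        # Query only for the methods the class-level suite missed.
        for method in uncovered_methods(cls, suite):
            suite += "\n" + gen_method_tests(cls, method)  # 1 request per gap
            requests += 1
        suites.append(suite)
    return suites, requests
```

The saving comes from the second pass touching only the residue: when the class-level pass already covers most methods, the fine-grained per-method queries, which dominate cost, are mostly skipped.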
Problem

Research questions and friction points this paper is trying to address.

LLM-based test generation
large language models
unit test generation
test coverage
software testing
Innovation

Methods, ideas, or system contributions that make the work stand out.

LLM-based test generation
empirical evaluation
test coverage
cost efficiency
prompt engineering