An Exploratory Study on Using Large Language Models for Mutation Testing

📅 2024-06-14
📈 Citations: 1
Influential: 0
📄 PDF
🤖 AI Summary
Prior work lacks systematic empirical evaluation of large language models' (LLMs) capability to generate high-quality mutants for mutation testing. Method: This paper presents the first large-scale empirical study, evaluating six open- and closed-source LLMs—including GPT-4 and CodeLlama—using multi-strategy prompt engineering on the Defects4J 2.0 and ConDefects Java benchmarks. Contribution/Results: LLM-generated mutants exhibit significantly higher behavioral similarity to real faults and achieve a 93% fault detection rate—19 percentage points higher than traditional rule-based approaches—along with markedly improved diversity. However, LLMs underperform conventional methods in compilation success rate and produce more equivalent and non-viable mutants. This work establishes the first comprehensive empirical foundation and quality assessment framework for LLM-driven intelligent mutation testing.

📝 Abstract
Mutation testing is a foundational approach in the software testing field, based on automatically seeded small syntactic changes, known as mutations. The question of how to generate high-utility mutations, to be used for testing purposes, forms a key challenge in the mutation testing literature. Large Language Models (LLMs) have shown great potential in code-related tasks, but their utility in mutation testing remains unexplored. To this end, we systematically investigate the performance of LLMs in generating effective mutations w.r.t. their usability, fault detection potential, and relationship with real bugs. In particular, we perform a large-scale empirical study involving six LLMs, including both state-of-the-art open- and closed-source models, and 851 real bugs on two Java benchmarks (i.e., 605 bugs from 12 projects of Defects4J 2.0 and 246 bugs of ConDefects). We find that compared to existing approaches, LLMs generate more diverse mutations that are behaviorally closer to real bugs, which leads to approximately 19 percentage points higher fault detection than current approaches (i.e., 93% vs. 74%). Nevertheless, the mutants generated by LLMs have a worse compilability rate and higher useless and equivalent mutation rates than those generated by rule-based approaches. This paper also examines alternative prompt engineering strategies and identifies the root causes of uncompilable mutations, providing insights for researchers to further enhance the performance of LLMs in mutation testing.
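To make the abstract's notion of "small syntactic changes" concrete, here is a minimal, hypothetical sketch of a classic rule-based mutation operator (arithmetic operator replacement), not taken from the paper. It uses Python's `ast` module for brevity, though the study itself targets Java; all names are illustrative.

```python
import ast

class ArithmeticOperatorMutator(ast.NodeTransformer):
    """Rule-based mutation operator: replace every '+' with '-'."""
    def visit_BinOp(self, node):
        self.generic_visit(node)
        if isinstance(node.op, ast.Add):
            node.op = ast.Sub()
        return node

source = "def total(a, b):\n    return a + b\n"
tree = ArithmeticOperatorMutator().visit(ast.parse(source))
mutant = ast.unparse(tree)  # "def total(a, b):\n    return a - b"

# A test suite "kills" this mutant if at least one test observes
# behavior that differs from the original program.
namespace = {}
exec(mutant, namespace)
print(namespace["total"](2, 3))  # original returns 5; the mutant does not
```

A mutant that no test can distinguish from the original (an *equivalent* mutant) inflates effort without measuring test quality, which is one of the weaknesses the paper reports for LLM-generated mutants.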
Problem

Research questions and friction points this paper is trying to address.

Comprehensively evaluating the performance of LLMs in mutation testing
Comparing the effectiveness of LLM-generated mutants against rule-based approaches
Assessing trade-offs between quality and cost in LLM-based mutation
Innovation

Methods, ideas, or system contributions that make the work stand out.

LLMs generate diverse mutants closer to real bugs
LLMs achieve a 93% fault detection rate, 19 percentage points above rule-based approaches
LLMs face higher non-compilability, useless-mutation, and equivalent-mutation rates
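The quality axes listed above can be sketched as simple ratios over a batch of generated mutants. This is an illustrative toy model, not the paper's evaluation harness; the `Mutant` class and field names are assumptions made for the example.

```python
from dataclasses import dataclass

@dataclass
class Mutant:
    compilable: bool   # does the mutated program compile?
    killed: bool       # detected by at least one test?
    equivalent: bool   # semantically identical to the original?

def quality_metrics(mutants):
    """Compute compilability rate and mutation score for a mutant batch.

    Mutation score is computed over compilable, non-equivalent mutants,
    since equivalent mutants can never be killed by any test.
    """
    compilable = [m for m in mutants if m.compilable]
    candidates = [m for m in compilable if not m.equivalent]
    return {
        "compilability_rate": len(compilable) / len(mutants),
        "mutation_score": sum(m.killed for m in candidates) / len(candidates),
    }

batch = [
    Mutant(compilable=True,  killed=True,  equivalent=False),
    Mutant(compilable=True,  killed=False, equivalent=True),
    Mutant(compilable=False, killed=False, equivalent=False),
    Mutant(compilable=True,  killed=True,  equivalent=False),
]
print(quality_metrics(batch))
```

Under this framing, the paper's headline trade-off is that LLM batches score higher on diversity and fault detection while rule-based batches score higher on compilability and produce fewer equivalent mutants.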