LLM-Powered Test Case Generation for Detecting Bugs in Plausible Programs

📅 2024-04-16
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the challenge of detecting subtle bugs in "plausible programs": programs that pass all existing test cases yet still contain latent defects. The authors propose TrickCatcher, an LLM-powered framework that operates in three stages: (1) an LLM generates program variants from the program under test (PUT) and its specification; (2) an LLM constructs an input generator from the specification to produce test inputs; and (3) the generated inputs are executed on both the PUT and its variants, with output inconsistencies flagging potential bugs. Evaluated on the TrickyBugs and EvalPlus datasets, TrickCatcher achieves recall, precision, and F1 scores that are 1.80×, 2.65×, and 1.66× those of state-of-the-art baselines, respectively. All code and data are publicly released.

📝 Abstract
Detecting tricky bugs in plausible programs (those that pass existing test suites yet still contain bugs) remains a significant challenge in software testing. To address this problem, we propose TrickCatcher, an LLM-powered approach to generating test cases for uncovering bugs in plausible programs. TrickCatcher operates in three stages: First, it uses an LLM to generate program variants based on the program under test (PUT) and its specification. Second, it employs an LLM to construct an input generator from the specification for producing test inputs. Finally, these inputs are executed on both the PUT and its program variants to detect inconsistencies in their outputs. We evaluate TrickCatcher on two datasets, TrickyBugs and EvalPlus, which include 366 human-written and 151 AI-generated plausible programs with tricky bugs. TrickCatcher achieves recall, precision, and F1 scores that are 1.80x, 2.65x, and 1.66x those of the state-of-the-art baselines, respectively. Code and data used are available at https://github.com/RinCloud/TrickCatcher.
Problem

Research questions and friction points this paper is trying to address.

Detecting tricky bugs in plausible programs that pass existing test suites
Generating LLM-based test cases that uncover hidden bugs
Improving recall and precision of bug detection over existing baselines
Innovation

Methods, ideas, or system contributions that make the work stand out.

An LLM generates program variants of the program under test
An LLM constructs an input generator from the specification
Output inconsistencies between the PUT and its variants flag potential bugs
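The final stage amounts to differential testing: run the generated inputs on the PUT and its variants and flag any input on which their outputs disagree. The sketch below illustrates the idea with a toy buggy program; all function names and the fixed input list are illustrative assumptions, not the paper's actual API (TrickCatcher uses LLM-generated variants and an LLM-built input generator).

```python
# Hypothetical sketch of cross-version consistency checking: execute each
# test input on the program under test (PUT) and its variants, and report
# any input whose outputs disagree. Names here are illustrative only.

def put(x):
    # Toy "plausible program": intended to return x squared,
    # but contains a tricky bug for negative inputs.
    return x * x if x >= 0 else -x * x

def variant_a(x):
    return x ** 2

def variant_b(x):
    return x * x

def differential_test(put_fn, variants, inputs):
    """Return the inputs whose outputs differ between the PUT and any variant."""
    suspicious = []
    for inp in inputs:
        outputs = [put_fn(inp)] + [v(inp) for v in variants]
        if len(set(outputs)) > 1:  # disagreement => potential tricky bug
            suspicious.append(inp)
    return suspicious

# In TrickCatcher these inputs would come from the LLM-built generator;
# here we use a fixed sample for illustration.
inputs = [0, 1, 2, -1, -3]
print(differential_test(put, [variant_a, variant_b], inputs))  # -> [-1, -3]
```

Note that the negative inputs are flagged because the PUT's buggy branch disagrees with the variants, even though a test suite covering only non-negative values would pass.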