🤖 AI Summary
This work addresses the challenge of detecting tricky bugs in “plausible programs”: programs that pass all existing test cases yet still contain latent defects. We propose TrickCatcher, an LLM-powered framework with a three-stage detection pipeline: (1) LLM-driven generation of program variants from the program under test and its specification; (2) LLM-driven construction of an input generator from the natural-language specification; and (3) cross-version consistency checking that runs the generated inputs on the original program and its variants to flag output mismatches. The approach decouples program mutation from specification-grounded input generation and combines them through behavioral consistency checking. Evaluated on TrickyBugs and EvalPlus, TrickCatcher achieves recall, precision, and F1 scores that are 1.80×, 2.65×, and 1.66× those of state-of-the-art baselines, respectively. All code and data are publicly released.
📝 Abstract
Detecting tricky bugs in plausible programs, i.e., programs that pass existing test suites yet still contain bugs, remains a significant challenge in software testing. To address this problem, we propose TrickCatcher, an LLM-powered approach that generates test cases to uncover bugs in plausible programs. TrickCatcher operates in three stages. First, it uses an LLM to generate program variants based on the program under test (PUT) and its specification. Second, it employs an LLM to construct an input generator from the specification to produce test inputs. Finally, these inputs are executed on both the PUT and its program variants to detect inconsistencies in their outputs. We evaluate TrickCatcher on two datasets, TrickyBugs and EvalPlus, which together include 366 human-written and 151 AI-generated plausible programs with tricky bugs. TrickCatcher achieves recall, precision, and F1 scores that are 1.80x, 2.65x, and 1.66x those of state-of-the-art baselines, respectively. The code and data are available at https://github.com/RinCloud/TrickCatcher.
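The third stage amounts to differential testing: an input is suspicious if the PUT's output disagrees with any variant's output. A minimal sketch of that check is below; the `put`, `variant`, and `inputs` here are illustrative stand-ins (a seeded max-of-list bug), not artifacts from the paper, and in TrickCatcher both the variants and the inputs would come from the LLM stages.

```python
from typing import Any, Callable, List

def detect_inconsistencies(
    put: Callable[[Any], Any],
    variants: List[Callable[[Any], Any]],
    inputs: List[Any],
) -> List[Any]:
    """Return the inputs on which the PUT disagrees with at least one variant."""
    suspicious = []
    for x in inputs:
        expected = put(x)
        if any(v(x) != expected for v in variants):
            suspicious.append(x)
    return suspicious

# Hypothetical PUT: a "plausible" max-of-list that passes tests with
# positive numbers but mishandles all-negative lists.
def put(xs):
    best = 0  # bug: should be initialized to xs[0]
    for v in xs:
        if v > best:
            best = v
    return best

# Stand-in for an LLM-generated program variant (here, a correct reference).
def variant(xs):
    return max(xs)

inputs = [[1, 5, 3], [-4, -2, -9], [0, 0]]
print(detect_inconsistencies(put, [variant], inputs))  # [[-4, -2, -9]]
```

Note that a flagged input only marks a behavioral divergence; deciding which version is wrong still requires the specification or a human check.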