AssertFlip: Reproducing Bugs via Inversion of LLM-Generated Passing Tests

📅 2025-07-23
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
In open-source and industrial settings, most bug reports lack executable reproduction tests, severely hindering debugging efficiency. To address this, we propose an LLM-based defect reproduction method: first generating passing tests—leveraging LLMs’ strength in synthesizing correct program logic—and then transforming them into failing tests via assertion flipping to precisely trigger the reported defect. This two-stage approach circumvents the significant challenge of directly generating failing tests. Evaluated on the SWT-Bench benchmark, our method achieves a 43.6% failure reproduction rate on the verified subset (SWT-Bench-Verified), substantially outperforming prior state-of-the-art techniques. Our work establishes a novel paradigm for automated defect diagnosis by enabling reliable, LLM-driven generation of executable failing test cases from natural-language bug reports.

📝 Abstract
Bug reproduction is critical in the software debugging and repair process, yet the majority of bugs in open-source and industrial settings lack executable tests to reproduce them at the time they are reported, making diagnosis and resolution more difficult and time-consuming. To address this challenge, we introduce AssertFlip, a novel technique for automatically generating Bug Reproducible Tests (BRTs) using large language models (LLMs). Unlike existing methods that attempt direct generation of failing tests, AssertFlip first generates passing tests on the buggy behaviour and then inverts these tests to fail when the bug is present. We hypothesize that LLMs are better at writing passing tests than ones that crash or fail on purpose. Our results show that AssertFlip outperforms all known techniques in the leaderboard of SWT-Bench, a benchmark curated for BRTs. Specifically, AssertFlip achieves a fail-to-pass success rate of 43.6% on the SWT-Bench-Verified subset.
Problem

Research questions and friction points this paper is trying to address.

Most bug reports in open-source and industrial settings lack executable reproduction tests
LLMs struggle to directly generate tests that fail on purpose
Missing reproduction tests make bug diagnosis and resolution slow and difficult
Innovation

Methods, ideas, or system contributions that make the work stand out.

Two-stage approach: first uses LLMs to generate tests that pass on the buggy behaviour
Inverts the generated assertions so the tests fail while the bug is present and pass once it is fixed
Achieves a state-of-the-art 43.6% fail-to-pass rate on SWT-Bench-Verified, topping the SWT-Bench leaderboard