🤖 AI Summary
Despite high test coverage, real-world projects still fail to detect all defects, as evidenced by the many unresolved issues in open-source issue trackers. This paper presents the first systematic investigation of regression testing’s dual role in debugging: (1) aiding the generation of reproduction tests for newly reported issues, and (2) validating that patches do not induce functional regressions. To address LLM context-length limits, noise from large test suites, and excessive inference overhead, the authors propose TestPrune, an automated test-suite pruning technique that performs lightweight, precise context optimization via problem-test relevance analysis. TestPrune integrates seamlessly into LLM-driven automated program repair pipelines. Evaluated on the SWE-Bench benchmarks, it improves bug reproduction rates by 6.2–9.0% and issue resolution rates by 9.4–12.9% (relative), at a marginal cost of $0.02–$0.05 per benchmark instance.
📝 Abstract
Test suites in real-world projects are often large and achieve high code coverage, yet they remain insufficient for detecting all bugs. The abundance of unresolved issues in open-source issue trackers highlights this gap. While regression tests are typically designed to ensure that past functionality is preserved in a new version, they can also serve a complementary purpose: debugging the current version. Specifically, regression tests can (1) enhance the generation of reproduction tests for newly reported issues, and (2) validate that patches do not regress existing functionality. We present TestPrune, a fully automated technique that leverages issue tracker reports and strategically reuses regression tests for both bug reproduction and patch validation.
A key contribution of TestPrune is its ability to automatically minimize the regression suite to a small, highly relevant subset of tests. Because LLM-based techniques now dominate automated debugging, this minimization is essential: large test suites exceed model context limits, introduce noise, and inflate inference costs. TestPrune can be plugged into any agentic bug-repair pipeline to orthogonally improve overall performance. As a proof of concept, we show that TestPrune yields a 6.2%–9.0% relative increase in issue reproduction rate within the Otter framework and a 9.4%–12.9% relative increase in issue resolution rate within the Agentless framework on the SWE-Bench Lite and SWE-Bench Verified benchmarks, capturing fixes that agents produced correctly but did not submit as final patches. Compared to these benefits, the cost overhead of TestPrune is minimal: $0.02 and $0.05 per SWE-Bench instance using the GPT-4o and Claude-3.7-Sonnet models, respectively.
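The abstract does not spell out how problem-test relevance is computed. As an illustration only (not the paper's actual method), a minimal pruner could rank tests by lexical overlap between the issue report and each test's source, keeping the top-k; all names below are hypothetical:

```python
import re
from collections import Counter

def tokenize(text):
    # Lowercase alphanumeric/underscore tokens; a crude stand-in for
    # whatever relevance analysis TestPrune actually performs.
    return [t for t in re.findall(r"[a-z0-9_]+", text.lower()) if len(t) > 2]

def prune_tests(issue_text, tests, k=2):
    """Rank tests by lexical overlap with the issue report, keep top k.

    `tests` maps a test identifier to its source text (name, docstring, body).
    """
    issue_tokens = Counter(tokenize(issue_text))
    def score(src):
        # Count how many distinct issue tokens appear in the test source.
        return sum(issue_tokens[t] for t in set(tokenize(src)))
    ranked = sorted(tests, key=lambda name: score(tests[name]), reverse=True)
    return ranked[:k]

issue = "ValueError when parsing empty config file in load_config"
tests = {
    "test_load_config_empty": "assert load_config('') raises ValueError on empty input",
    "test_save_report": "save_report(data) writes a report to disk",
    "test_parse_config_defaults": "parsing config defaults yields fallback values",
}
print(prune_tests(issue, tests, k=2))
# → ['test_load_config_empty', 'test_parse_config_defaults']
```

The pruned subset is small enough to fit in an LLM prompt, which is the point of the context optimization the abstract describes; a real implementation would likely use richer signals than token overlap.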