🤖 AI Summary
LLMs frequently generate code containing subtle, hard-to-diagnose logic errors; existing self-repair methods rely on static analysis or shallow execution logs and lack the interactive, dynamic debugging capabilities that make human debugging effective. This paper introduces InspectCoder, the first agentic code repair system that enables LLMs to set breakpoints, inspect runtime state, and run incremental experiments through a real debugger, shifting the repair paradigm from blind trial-and-error to root-cause localization. Its key contributions are: (1) a novel LLM-debugger collaborative agent framework supporting stateful, adaptive dynamic analysis; (2) real-time debugger feedback used as a process reward to guide multi-step reasoning; and (3) InspectWare, an open-source middleware that abstracts debugger complexity and supports mainstream Python testing frameworks. On BigCodeBench-R and LiveCodeBench-R, InspectCoder achieves 5.10%-60.37% relative improvements in repair accuracy over the strongest baseline, while improving bug-fix efficiency by 1.67x-2.24x.
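To make the "debugger feedback as process reward" idea concrete, here is a minimal, hypothetical sketch; the data structures, reward values, and greedy selection rule are our own illustration, not the paper's actual design. Each debugging action yields an observation at a breakpoint, the observation is scored immediately, and accumulated scores steer which hypothesis the agent pursues next.

```python
from dataclasses import dataclass

@dataclass
class Observation:
    """What one debugging step returned (hypothetical structure)."""
    hypothesis: str     # e.g. "divisor is off by one"
    state_seen: dict    # locals captured at the breakpoint
    confirmed: bool     # did the inspected state match the hypothesis?

def process_reward(obs: Observation) -> float:
    # Immediate per-step signal: a confirmed hypothesis narrows the root
    # cause; a refuted one still prunes the search space, so it is nonzero.
    return 1.0 if obs.confirmed else 0.3

def pick_next_hypothesis(candidates, history):
    # Greedy choice: pursue the line of inquiry whose past steps scored best.
    scores = {h: 0.0 for h in candidates}
    for obs in history:
        if obs.hypothesis in scores:
            scores[obs.hypothesis] += process_reward(obs)
    return max(candidates, key=lambda h: scores[h])

history = [
    Observation("loop drops the last element", {"total": 12}, False),
    Observation("divisor is off by one", {"denominator": 2}, True),
]
candidates = ["loop drops the last element", "divisor is off by one"]
print(pick_next_hypothesis(candidates, history))  # -> "divisor is off by one"
```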
Abstract
Large Language Models (LLMs) frequently generate buggy code with complex logic errors that are challenging to diagnose. While existing LLM-based self-repair approaches conduct intensive static semantic analysis or rely on superficial execution logs, they miss the in-depth runtime behaviors that often expose bug root causes, lacking the interactive dynamic analysis capabilities that make human debugging effective. We present InspectCoder, the first agentic program repair system that empowers LLMs to actively conduct dynamic analysis via interactive debugger control. Our dual-agent framework enables strategic breakpoint placement, targeted state inspection, and incremental runtime experimentation within stateful debugger sessions. Unlike existing methods that follow fixed log collection procedures, InspectCoder adaptively inspects and perturbs relevant intermediate states at runtime, and leverages immediate process rewards from debugger feedback to guide multi-step reasoning, transforming the LLM debugging paradigm from blind trial-and-error into systematic root cause diagnosis. We conduct comprehensive experiments on two challenging self-repair benchmarks: BigCodeBench-R and LiveCodeBench-R. InspectCoder achieves 5.10%-60.37% relative improvements in repair accuracy over the strongest baseline, while delivering 1.67x-2.24x superior bug-fix efficiency, respectively. We also contribute InspectWare, an open-source middleware that abstracts debugger complexities and maintains stateful debugging sessions across mainstream Python testing frameworks. Our work provides actionable insights into interactive LLM-debugger systems, demonstrating the significant potential of LLM-driven dynamic analysis for automated software engineering.
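As a hedged illustration of what the abstract calls strategic breakpoint placement, targeted state inspection, and incremental runtime experimentation, the sketch below uses only Python's standard bdb module. The StateInspector class, its probe_expr parameter, and the buggy_mean example are assumptions made for illustration; InspectWare's actual API is not shown in the abstract.

```python
import bdb


class StateInspector(bdb.Bdb):
    """Pause at a breakpoint, snapshot locals, and try out an expression.

    Illustrative only: this demonstrates the kind of primitive
    (programmatic breakpoints, state inspection, in-context
    experimentation) that an LLM-debugger system could build on.
    """

    def __init__(self, filename, lineno, probe_expr):
        super().__init__()
        self.probe_expr = probe_expr   # expression to evaluate at the break
        self.snapshots = []            # (lineno, locals, probe result)
        self.set_break(filename, lineno)

    def user_line(self, frame):
        # Invoked by the trace machinery whenever execution stops at a line.
        if self.break_here(frame):
            local_state = dict(frame.f_locals)
            probe = eval(self.probe_expr, frame.f_globals, local_state)
            self.snapshots.append((frame.f_lineno, local_state, probe))
        self.set_continue()            # resume instead of opening a prompt


def buggy_mean(xs):
    total = 0
    for x in xs:
        total += x
    return total / (len(xs) - 1)      # bug: divisor should be len(xs)


if __name__ == "__main__":
    target = buggy_mean.__code__.co_firstlineno + 4   # the return statement
    inspector = StateInspector(buggy_mean.__code__.co_filename, target,
                               probe_expr="total / len(xs)")  # candidate fix
    result = inspector.runcall(buggy_mean, [2, 4, 6])
    print("buggy result:", result)                    # 6.0, expected 4.0
    for lineno, state, probe in inspector.snapshots:
        print(f"line {lineno}: locals={state} candidate fix -> {probe}")
```

Running this prints the buggy result (6.0) alongside the locals captured at the breakpoint and the candidate fix's value (4.0), the kind of per-step runtime evidence an agent can act on without restarting the program from scratch.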