🤖 AI Summary
Existing automated program repair (APR) methods rely on simplistic code representations, which limits their compatibility with state-of-the-art large language models (LLMs) and constrains both repair capability and generalization. To address this, we propose a semantics-aware, task-specific code representation paradigm and, for the first time in APR, systematically adapt parameter-efficient fine-tuning (PEFT) techniques such as LoRA to construct a lightweight, robust repair adapter. We further adapt the LLaMA architecture and perform instruction tuning tailored to APR. Our method repairs 144, 109, and 20 real-world bugs on Defects4J v2, HumanEval-Java, and GitBug-Java, respectively, outperforming all existing baselines and demonstrating strong cross-distribution generalization. This work establishes the first PEFT-driven APR framework that jointly achieves expressive representational power, architectural scalability, and empirical effectiveness.
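The "semantics-aware, task-specific code representation" contrasts with feeding the model only the raw buggy function. A minimal sketch of the idea, where fault-localization signals are injected into the model input; the marker tokens and helper below are hypothetical illustrations, not the paper's actual format:

```python
# Hypothetical sketch: enrich the repair input with fault-localization markers
# instead of handing the model only the raw buggy function text.

BUGGY_FUNCTION = """\
int max(int a, int b) {
    if (a < b) {   // bug: comparison inverted
        return a;
    }
    return b;
}
"""

def naive_representation(code: str) -> str:
    # Baseline representation: the raw function, no repair signal at all.
    return code

def repair_representation(code: str, bug_start: int, bug_end: int) -> str:
    # Task-specific representation: wrap the suspicious lines (1-indexed) in
    # marker comments so the model knows where it is expected to edit.
    lines = code.splitlines()
    out = []
    for i, line in enumerate(lines, start=1):
        if i == bug_start:
            out.append("// <BUG_START>")
        out.append(line)
        if i == bug_end:
            out.append("// <BUG_END>")
    return "\n".join(out)

print(repair_representation(BUGGY_FUNCTION, 2, 2))
```

The intuition is that an adapter fine-tuned on inputs like the second form can learn to condition its patch on the marked region, rather than guessing which part of the function is faulty.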
📝 Abstract
Automated Program Repair (APR) has evolved significantly with the advent of Large Language Models (LLMs). Fine-tuning LLMs for program repair is a recent avenue of research, with many dimensions that have not yet been explored. Existing work mostly fine-tunes LLMs with naive code representations and does not scale to frontier models. To address this problem, we propose RepairLLaMA, a novel program repair approach that 1) identifies optimal code representations for APR with fine-tuned models, and 2) pioneers state-of-the-art parameter-efficient fine-tuning (PEFT) techniques for program repair. This results in RepairLLaMA producing a highly effective ‘program repair adapter’ for fixing bugs with AI. Our experiments demonstrate the validity of both concepts. First, fine-tuning adapters with program-repair-specific code representations enables the model to exploit meaningful repair signals and produce better patches. Second, parameter-efficient fine-tuning helps training converge and clearly contributes to RepairLLaMA's effectiveness in fixing bugs outside the fine-tuning data distribution. Overall, RepairLLaMA correctly fixes 144 Defects4J v2, 109 HumanEval-Java, and 20 GitBug-Java bugs, outperforming all baselines.
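The ‘program repair adapter’ rests on LoRA-style PEFT: the pretrained weights stay frozen and only a small low-rank update is trained. The following is a minimal NumPy sketch of that low-rank update, not the paper's actual training setup (which uses a full LLM, not a single layer); the class name and sizes are illustrative:

```python
import numpy as np

class LoRALinear:
    """A frozen linear layer plus a trainable low-rank (LoRA) update.

    Only A and B are trained, so the trainable parameter count is
    r * (in_features + out_features) instead of in_features * out_features.
    """

    def __init__(self, in_features: int, out_features: int,
                 r: int = 8, alpha: float = 16.0, seed: int = 0):
        rng = np.random.default_rng(seed)
        # Stand-in for the frozen pretrained weight matrix.
        self.W = rng.normal(size=(out_features, in_features)) * 0.02
        # Trainable low-rank factors; B is zero-initialized so the adapter
        # starts out as a no-op and training begins from the pretrained model.
        self.A = rng.normal(size=(r, in_features)) * 0.01
        self.B = np.zeros((out_features, r))
        self.scaling = alpha / r

    def forward(self, x: np.ndarray) -> np.ndarray:
        # Base output plus the scaled low-rank correction.
        return x @ self.W.T + (x @ self.A.T @ self.B.T) * self.scaling

    def trainable_params(self) -> int:
        return self.A.size + self.B.size

    def frozen_params(self) -> int:
        return self.W.size

layer = LoRALinear(1024, 1024, r=8)
print(layer.trainable_params(), "trainable vs", layer.frozen_params(), "frozen")
```

Because only the small factors are stored, the resulting adapter is lightweight to train and distribute, which is what makes a dedicated repair adapter on top of a large base model practical.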