MergeRepair: An Exploratory Study on Merging Task-Specific Adapters in Code LLMs for Automated Program Repair

📅 2024-08-18
🏛️ arXiv.org
📈 Citations: 2
Influential: 0
🤖 AI Summary
This paper proposes MergeRepair, a framework for studying merged task-specific adapters in Code Large Language Models (Code LLMs) for Automated Program Repair (APR). Instead of training a new adapter for each task, the authors merge existing task-specific adapters using three merging methods (weight-averaging, TIES, and DARE-TIES) under two scenarios: (i) equal-weight averaging over adapter parameters, treating all adapters as equally important, and (ii) a proposed continual merging approach in which adapters are merged sequentially, so that both the order and the weight of the merged adapters matter. The planned exploratory study investigates whether merged adapters improve and generalize APR performance, and how task order, as it arises in real-world software projects, affects the merged adapter.

📝 Abstract
[Context] Large Language Models (LLMs) have shown good performance in several software development-related tasks such as program repair, documentation, code refactoring, debugging, and testing. Adapters are specialized, small modules designed for parameter-efficient fine-tuning of LLMs for specific tasks, domains, or applications without requiring extensive retraining of the entire model. These adapters offer a more efficient way to customize LLMs for particular needs, leveraging the pre-existing capabilities of the large model. Merging LLMs and adapters has shown promising results for various natural language domains and tasks, enabling the use of the learned models and adapters without additional training for a new task. [Objective] This research proposes continual merging and empirically studies the capabilities of merged adapters in Code LLMs, specifically for the Automated Program Repair (APR) task. The goal is to gain insights into whether and how merging task-specific adapters can affect the performance of APR. [Method] In our framework, MergeRepair, we plan to merge multiple task-specific adapters using three different merging methods and evaluate the performance of the merged adapter for the APR task. Particularly, we will employ two main merging scenarios for all three techniques: (i) merging using equal-weight averaging applied on the parameters of different adapters, where all adapters are of equal importance; and (ii) our proposed approach, continual merging, in which we sequentially merge the task-specific adapters and the order and weight of the merged adapters matter. Through an exploratory study of merging techniques, we will investigate the improvement and generalizability of merged adapters for APR. Through continual merging, we will explore the capability of merged adapters and the effect of task order, as it occurs in real-world software projects.
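The two merging scenarios in the abstract can be sketched in a few lines. This is a minimal illustration on flat parameter dictionaries, not the paper's implementation; the function names, the per-step interpolation weight `lam`, and the dict-of-floats representation are assumptions for clarity.

```python
def equal_weight_merge(adapters):
    # Scenario (i): average each parameter across all adapters,
    # treating every adapter as equally important.
    return {name: sum(a[name] for a in adapters) / len(adapters)
            for name in adapters[0]}

def continual_merge(adapters, step_weights):
    # Scenario (ii), continual merging: fold adapters in one at a time.
    # At each step the running merge is interpolated with the next
    # adapter, so both the task order and the per-step weight matter.
    merged = dict(adapters[0])
    for adapter, lam in zip(adapters[1:], step_weights):
        merged = {name: (1 - lam) * merged[name] + lam * adapter[name]
                  for name in merged}
    return merged
```

Note that with a per-step weight of 0.5 a two-adapter continual merge reduces to equal-weight averaging, while any other weight (or a longer sequence) gives earlier and later adapters different effective influence, which is why task order becomes a variable worth studying.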
Problem

Research questions and friction points this paper is trying to address.

Investigates merging task-specific adapters in code LLMs for automated program repair
Explores effectiveness of merged adapters in software engineering tasks
Compares performance of different merging methods for adapter integration
Innovation

Methods, ideas, or system contributions that make the work stand out.

Merging task-specific adapters for program repair
Using weight-averaging, TIES, and DARE-TIES merging methods
Introducing continual merging with ordered adapters
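The paper names weight-averaging, TIES, and DARE-TIES as its three merging methods. A minimal sketch of the TIES trim/elect-sign/merge steps and the DARE drop-and-rescale step, applied to flattened parameter deltas, might look like the following; the function names, the keep fraction, and the drop probability are illustrative assumptions, not the paper's configuration.

```python
import random

def ties_merge(deltas, keep_frac=0.2):
    # TIES (sketch): per adapter, trim all but the largest-magnitude
    # entries; per position, elect a sign by total magnitude; then
    # average only the surviving entries that agree with that sign.
    dim = len(deltas[0])
    k = max(1, int(keep_frac * dim))
    trimmed = []
    for d in deltas:
        thresh = sorted((abs(x) for x in d), reverse=True)[k - 1]
        trimmed.append([x if abs(x) >= thresh else 0.0 for x in d])
    merged = []
    for i in range(dim):
        col = [t[i] for t in trimmed]
        pos = sum(x for x in col if x > 0)
        neg = -sum(x for x in col if x < 0)
        sign = 1.0 if pos >= neg else -1.0
        kept = [x for x in col if x * sign > 0]
        merged.append(sum(kept) / len(kept) if kept else 0.0)
    return merged

def dare(delta, drop_p=0.9, rng=random.Random(0)):
    # DARE (sketch): randomly drop entries with probability drop_p and
    # rescale survivors by 1/(1 - drop_p) to preserve the expectation.
    # DARE-TIES applies this to each delta before TIES merging.
    return [0.0 if rng.random() < drop_p else x / (1 - drop_p)
            for x in delta]
```

In a DARE-TIES pipeline each adapter's delta would first pass through `dare` and the sparsified deltas would then be combined with `ties_merge`.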
Meghdad Dehghan
Department of Computer Science, University of British Columbia, Kelowna, Canada
Jie Jw Wu
Department of Computer Science, University of British Columbia, Kelowna, Canada
Fatemeh H. Fard
Department of Computer Science, University of British Columbia, Kelowna, Canada
Ali Ouni
Department of Software and IT Engineering, ETS Montreal, University of Quebec, Montreal, Canada