๐ค AI Summary
Merging low-rank weight models (e.g., LoRA or SVD-compressed models) often incurs severe performance degradation due to irreversible information loss.
Method: We propose *reversible model merging*, which constructs a compact, orthogonal basis space enabling exact linear reconstruction of each task-specific modelโthereby preserving all original information. The method employs a closed-form solution that jointly optimizes the basis matrix and task-specific coefficients, requiring no auxiliary data or fine-tuning. Crucially, any original model can be losslessly restored on demand.
Contribution/Results: This is the first work achieving *lossless, reversible merging* of low-rank adaptation models. Experiments across multiple datasets and model scales demonstrate substantial improvements over existing merging approaches: reconstructed models achieve performance nearly matching their respective task-specific counterparts, while maintaining high efficiency and broad applicability.
๐ Abstract
Model merging aims to combine multiple fine-tuned models into a single set of weights that performs well across all source tasks. While prior work has shown that merging can approximate the performance of individual fine-tuned models for each task, it largely overlooks scenarios where models are compressed into low-rank representations, either through low-rank adaptation (LoRA) or post-training singular value decomposition (SVD). We first demonstrate that applying conventional merging methods to low-rank weights leads to severe performance degradation in the merged model. Motivated by this phenomenon, we propose a fundamentally different approach: instead of collapsing all adapters into one set of weights, we construct a compact basis (e.g., an equivalent of holding two or more models) from which original task-specific models can be recovered via linear combination. This reframes merging as generating a reconstruction-capable model space rather than producing a single merged model. Crucially, this allows us to ``revert'' to each individual model when needed, recognizing that no merged model can consistently outperform one specialized for its task. Building on this insight, we introduce our method, Reversible Model Merging (RMM), an efficient, data-free, and flexible method that provides a closed-form solution for selecting the optimal basis of model weights and task-specific coefficients for linear combination. Extensive experiments across diverse datasets and model scales demonstrate that RMM consistently outperforms existing merging approaches, preserving the performance of low-rank compressed models by a significant margin.