🤖 AI Summary
Multi-task model merging often suffers significant performance degradation due to feature drift: the inconsistency between the representations that expert models and the merged unified model produce for the same input. This work is the first to explicitly identify feature drift as the primary cause of this deterioration. We propose Layer-wise Optimal Task Vector Merging (LOT Merging), which explicitly minimizes feature discrepancies between expert and unified models layer by layer. The approach is formulated as a convex quadratic optimization problem with analytically tractable closed-form solutions for both linear-layer and normalization-layer parameters. LOT Merging requires no additional training and relies solely on efficient matrix operations. Extensive experiments on vision and vision-language benchmarks demonstrate consistent superiority over state-of-the-art methods, with absolute improvements of up to 4.4% on ViT-B/32.
📝 Abstract
Multi-task model merging aims to consolidate knowledge from multiple fine-tuned task-specific experts into a unified model while minimizing performance degradation. Existing methods primarily approach this by minimizing differences between task-specific experts and the unified model, either from a parameter-level or a task-loss perspective. However, parameter-level methods exhibit a significant performance gap compared to the upper bound, while task-loss approaches entail costly secondary training procedures. In contrast, we observe that performance degradation closely correlates with feature drift, i.e., differences in feature representations of the same sample caused by model merging. Motivated by this observation, we propose Layer-wise Optimal Task Vector Merging (LOT Merging), a technique that explicitly minimizes feature drift between task-specific experts and the unified model in a layer-by-layer manner. LOT Merging can be formulated as a convex quadratic optimization problem, enabling us to analytically derive closed-form solutions for the parameters of linear and normalization layers. Consequently, LOT Merging achieves efficient model consolidation through basic matrix operations. Extensive experiments across vision and vision-language benchmarks demonstrate that LOT Merging significantly outperforms baseline methods, achieving improvements of up to 4.4% (ViT-B/32) over state-of-the-art approaches.
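To make the idea concrete, here is a minimal sketch of a feature-drift-minimizing merge for a single linear layer. For each expert $t$ with weight $W_t$ and cached layer inputs $X_t$, it solves $\min_W \sum_t \|W X_t - W_t X_t\|_F^2$ in closed form via the normal equations. This is an illustration of the layer-wise least-squares principle described above, not the paper's exact algorithm; the function name, variable names, and the ridge term are assumptions.

```python
import numpy as np

def lot_merge_linear(expert_weights, expert_inputs, ridge=1e-6):
    """Merge linear-layer weights by minimizing feature drift (sketch).

    Solves min_W sum_t ||W X_t - W_t X_t||_F^2 in closed form, where
    W_t is expert t's weight matrix (d_out, d_in) and X_t stacks that
    expert's layer inputs as columns (d_in, n_t).
    """
    d_out, d_in = expert_weights[0].shape
    A = np.zeros((d_in, d_in))   # sum_t X_t X_t^T  (input Gram matrix)
    B = np.zeros((d_out, d_in))  # sum_t (W_t X_t) X_t^T (feature cross-term)
    for W_t, X_t in zip(expert_weights, expert_inputs):
        A += X_t @ X_t.T
        B += (W_t @ X_t) @ X_t.T
    # Small ridge (an added assumption) keeps the normal equations
    # well conditioned when the inputs are rank-deficient.
    return B @ np.linalg.inv(A + ridge * np.eye(d_in))
```

As a sanity check, with a single expert and full-rank inputs the merged weight recovers that expert exactly, and with multiple experts the solution zeroes the gradient of the summed drift objective.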