Towards Minimizing Feature Drift in Model Merging: Layer-wise Task Vector Fusion for Adaptive Knowledge Integration

πŸ“… 2025-05-29
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ€– AI Summary
Multi-task model merging often suffers significant performance degradation due to feature driftβ€”the inconsistency in representations of the same input between expert models and the merged unified model. This work is the first to explicitly identify feature drift as the primary cause of performance deterioration. We propose Layerwise Optimal Task vector Merging (LOT Merging), which explicitly minimizes feature discrepancies between expert and unified models layer-by-layer. Our approach formulates a convex quadratic optimization problem with analytically tractable closed-form solutions for both linear and normalization layer parameters. LOT Merging requires no additional training and relies solely on efficient matrix operations. Extensive experiments on vision and vision-language benchmarks demonstrate consistent superiority over state-of-the-art methods, achieving up to a 4.4% absolute improvement on ViT-B/32.

πŸ“ Abstract
Multi-task model merging aims to consolidate knowledge from multiple fine-tuned task-specific experts into a unified model while minimizing performance degradation. Existing methods primarily approach this by minimizing differences between task-specific experts and the unified model, either from a parameter-level or a task-loss perspective. However, parameter-level methods exhibit a significant performance gap compared to the upper bound, while task-loss approaches entail costly secondary training procedures. In contrast, we observe that performance degradation closely correlates with feature drift, i.e., differences in feature representations of the same sample caused by model merging. Motivated by this observation, we propose Layer-wise Optimal Task Vector Merging (LOT Merging), a technique that explicitly minimizes feature drift between task-specific experts and the unified model in a layer-by-layer manner. LOT Merging can be formulated as a convex quadratic optimization problem, enabling us to analytically derive closed-form solutions for the parameters of linear and normalization layers. Consequently, LOT Merging achieves efficient model consolidation through basic matrix operations. Extensive experiments across vision and vision-language benchmarks demonstrate that LOT Merging significantly outperforms baseline methods, achieving improvements of up to 4.4% (ViT-B/32) over state-of-the-art approaches.
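The abstract states that the merged parameters of each linear layer can be obtained in closed form by minimizing feature drift as a convex quadratic. A minimal sketch of that idea (not the paper's implementation; all names and the ridge term are illustrative assumptions): for one linear layer, find the weight `W` minimizing the summed drift `Σ_t ||W X_t βˆ’ W_t X_t||_FΒ²` over the experts' inputs, which reduces to solving a linear system built from Gram matrices.

```python
import numpy as np

def lot_merge_linear(experts, features, ridge=1e-6):
    """Merge per-task linear-layer weights W_t (d_out x d_in) by minimizing
    sum_t ||W X_t - W_t X_t||_F^2 over W, where features[t] = X_t is a
    (d_in x n_t) matrix of that layer's inputs for task t.
    Setting the gradient to zero gives the closed form
        W* = (sum_t W_t X_t X_t^T) (sum_t X_t X_t^T)^{-1},
    computed here via a (ridge-regularized) linear solve."""
    d_in = features[0].shape[0]
    A = ridge * np.eye(d_in)           # Gram accumulator, regularized for stability
    B = np.zeros_like(experts[0])      # expert-weighted Gram accumulator
    for W_t, X_t in zip(experts, features):
        G = X_t @ X_t.T                # (d_in x d_in) Gram matrix of task inputs
        A += G
        B += W_t @ G
    # Solve W A = B; since A is symmetric, solve A W^T = B^T and transpose.
    return np.linalg.solve(A, B.T).T
```

Because the objective is a convex quadratic, this solve is exact per layer and needs only matrix operations, matching the paper's claim of training-free merging; applying it layer-by-layer (with the analogous scalar solution for normalization parameters) yields the merged model.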
Problem

Research questions and friction points this paper is trying to address.

Minimizing feature drift in multi-task model merging
Efficiently consolidating knowledge without secondary training
Improving performance over existing parameter-level methods
Innovation

Methods, ideas, or system contributions that make the work stand out.

Layer-wise Optimal Task Vector Merging (LOT Merging)
Minimizes feature drift in model merging
Closed-form solutions for efficient consolidation
Wenju Sun
Key Laboratory of Big Data & Artificial Intelligence in Transportation, Beijing Jiaotong University
Qingyong Li
Key Laboratory of Big Data & Artificial Intelligence in Transportation, Beijing Jiaotong University
Wen Wang
Key Laboratory of Big Data & Artificial Intelligence in Transportation, Beijing Jiaotong University
Yang Liu
Key Laboratory of Big Data & Artificial Intelligence in Transportation, Beijing Jiaotong University
Yangli-ao Geng
Beijing Jiaotong University
Machine Learning Β· Data Mining Β· Unsupervised Learning
Boyang Li
College of Computing and Data Science, Nanyang Technological University