AI Summary
Existing multi-task model merging methods achieve strong in-distribution (ID) performance but generalize poorly out of distribution (OOD). To address this, we propose Layer-wise Pruning of Task Vectors (LwPTV), the first model merging framework with interpretable, layer-granularity pruning. LwPTV models task-specific parameter deviations as task vectors, computes layer-wise saliency scores to identify redundant parameters, and applies sparse masks that fall back to the pre-trained backbone weights at pruned layers. Crucially, it preserves ID accuracy with no degradation while substantially enhancing OOD robustness. Moreover, LwPTV is modular and can be combined with 12 state-of-the-art merging methods in a plug-and-play manner. Evaluated across multiple OOD benchmarks, LwPTV improves average accuracy by 8.2% over baseline merging approaches.
Abstract
Multi-task learning (MTL) concurrently trains a model on diverse task datasets to exploit shared features, thereby improving overall performance across tasks. Recent studies have focused on merging multiple independently trained models' parameters into a unified model for MTL, circumventing the need for training data and broadening the scenarios in which MTL can be applied. However, current model merging approaches concentrate predominantly on enhancing performance on in-domain (ID) datasets, often overlooking their efficacy on out-of-domain (OOD) datasets. In this work, we propose LwPTV (Layer-wise Pruning Task Vector), which builds a saliency score that measures the redundancy of parameters in task vectors. This design yields a mask vector for each task, enabling layer-wise pruning of the task vectors that keeps only the pre-trained model parameters at the pruned layers of the merged model. Owing to its flexibility, our method can be seamlessly integrated with most existing model merging methods to improve their performance on OOD tasks. Extensive experiments demonstrate that applying our method yields substantial improvements in OOD performance while preserving performance on ID tasks.
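The procedure the abstract describes (per-layer task vectors, a saliency score measuring redundancy, and a mask that reverts pruned layers to the pre-trained weights) can be sketched as below. This is a minimal illustration, not the paper's exact method: the saliency definition (mean absolute deviation per layer) and the `keep_ratio` parameter are assumptions for the sake of the example.

```python
import numpy as np

def layerwise_prune(pretrained, finetuned, keep_ratio=0.5):
    """Layer-wise pruning sketch: zero out low-saliency task-vector layers,
    so the merged model keeps the pre-trained weights at those layers."""
    # Task vector: per-layer deviation of the fine-tuned model from the backbone.
    task_vector = {name: finetuned[name] - pretrained[name] for name in pretrained}
    # Hypothetical saliency score: mean absolute deviation per layer.
    saliency = {name: float(np.abs(tv).mean()) for name, tv in task_vector.items()}
    # Keep the top-saliency layers; mask out (prune) the rest.
    n_keep = max(1, int(round(keep_ratio * len(saliency))))
    kept = set(sorted(saliency, key=saliency.get, reverse=True)[:n_keep])
    mask = {name: (1.0 if name in kept else 0.0) for name in saliency}
    # Merged model: pre-trained weights plus the masked task vector.
    merged = {name: pretrained[name] + mask[name] * task_vector[name]
              for name in pretrained}
    return merged, mask

# Toy usage: "l0" barely deviates from the backbone, "l1" deviates strongly,
# so with keep_ratio=0.5 the mask prunes "l0" back to the pre-trained weights.
pre = {"l0": np.zeros(3), "l1": np.zeros(3)}
ft = {"l0": np.full(3, 0.01), "l1": np.full(3, 1.0)}
merged, mask = layerwise_prune(pre, ft, keep_ratio=0.5)
```

In a multi-task setting this masking would be applied per task vector before the chosen merging rule (e.g. averaging) combines them, which is what makes it plug-and-play with existing merging methods.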