AI Summary
This work addresses the limitations of existing model-fusion methods for multi-task fine-tuned Vision Transformers, which often overlook layer-wise heterogeneity, leading to severe interference in shallow layers and underutilization of task-specific features in deeper layers. To mitigate this, the authors propose LARV, a training- and data-free, layer-aware adaptive rescaling mechanism that generates per-layer scaling factors via lightweight, deterministic rules. LARV suppresses shallow-layer interference while enhancing deep-layer alignment, leveraging data-free layer proxy metrics and supporting either tiered or continuous mapping strategies. Designed as a plug-and-play rescaling overlay, LARV is orthogonal to and compatible with existing fusion approaches. Extensive evaluations on FusionBench demonstrate consistent performance gains across diverse baselines, with Iso-C + LARV achieving 92.6% on ViT-L/14 and remaining robust across 8-, 14-, and 20-task settings.
Abstract
Model merging aims to combine multiple fine-tuned models into a single multi-task model without access to training data. Existing task-vector merging methods such as TIES, TSV-M, and Iso-C/CTS differ in their aggregation rules but treat all layers nearly uniformly. This assumption overlooks the strong layer-wise heterogeneity of large vision transformers, where shallow layers are sensitive to interference while deeper layers encode stable task-specific features. We introduce LARV, a training-free, data-free, merger-agnostic Layer-wise Adaptive Rescaling Veneer that plugs into any task-vector merger and assigns a per-layer scale to each task vector before aggregation, and show that it consistently boosts diverse merging rules. LARV adaptively suppresses shallow-layer interference and amplifies deeper-layer alignment using a simple deterministic schedule, requiring no retraining and no modification to existing mergers. To our knowledge, this is the first work to perform layer-aware scaling for task-vector merging. LARV computes simple data-free layer proxies and turns them into scales through a lightweight rule; we study several instantiations within one framework (e.g., tiered two- or three-level scaling with fixed values, or continuous mappings) and find that tiered choices offer the best robustness, while continuous mappings remain an ablation. LARV is orthogonal to the base merger and adds negligible cost. On FusionBench with Vision Transformers, LARV consistently improves all task-vector baselines across 8-, 14-, and 20-task settings; for example, Iso-C + LARV reaches 85.9% on ViT-B/32, 89.2% on ViT-B/16, and 92.6% on ViT-L/14. Layer-wise analysis and corruption tests further indicate that LARV suppresses shallow-layer interference while modestly amplifying deeper, task-stable features, turning model merging into a robust, layer-aware procedure rather than a uniform one.
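To make the mechanism concrete, the following is a minimal NumPy sketch of layer-wise adaptive rescaling in the spirit described above. It is an illustrative assumption, not the paper's implementation: the data-free layer proxy is taken to be the mean pairwise cosine similarity of task vectors at each layer, the tiered rule assigns three fixed scales by terciles of that proxy, and the base merger is plain task-vector averaging. All function names (`layer_proxies`, `tiered_scales`, `merge_with_larv`) and the tier values `(0.6, 1.0, 1.2)` are hypothetical.

```python
import numpy as np

def pairwise_cos(vecs):
    """Mean pairwise cosine similarity among flattened arrays
    (one illustrative data-free proxy of cross-task agreement)."""
    sims = []
    for i in range(len(vecs)):
        for j in range(i + 1, len(vecs)):
            a, b = vecs[i].ravel(), vecs[j].ravel()
            denom = np.linalg.norm(a) * np.linalg.norm(b) + 1e-12
            sims.append(float(a @ b) / denom)
    return float(np.mean(sims)) if sims else 1.0

def layer_proxies(task_vectors):
    """task_vectors: list of dicts {layer_name: ndarray}, one per task.
    Returns a per-layer proxy score, computed without any data."""
    return {layer: pairwise_cos([tv[layer] for tv in task_vectors])
            for layer in task_vectors[0]}

def tiered_scales(proxies, scales=(0.6, 1.0, 1.2)):
    """Three-level tiered rule (hypothetical values): layers whose
    task vectors disagree (low proxy, interference-prone) are damped,
    high-agreement layers are mildly amplified."""
    vals = sorted(proxies.values())
    lo_t, hi_t = vals[len(vals) // 3], vals[(2 * len(vals)) // 3]
    out = {}
    for layer, p in proxies.items():
        out[layer] = scales[0] if p < lo_t else scales[1] if p < hi_t else scales[2]
    return out

def merge_with_larv(base, task_vectors):
    """Apply the per-layer scale before aggregation; simple averaging
    stands in for the base merger (TIES, Iso-C, etc. would slot in here)."""
    scales = tiered_scales(layer_proxies(task_vectors))
    return {layer: w + scales[layer] * np.mean([tv[layer] for tv in task_vectors], axis=0)
            for layer, w in base.items()}
```

Because the scales are applied to each task vector's layer before (or, as here, commuting with) the aggregation step, the overlay leaves the base merger's rule untouched, which is what makes it plug-and-play.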