🤖 AI Summary
To address catastrophic forgetting and poor adaptation to new tasks—caused by severe parameter interference and weak test-distribution adaptability in data-free continual model fusion—we propose a test-time online dynamic fusion framework. Our approach introduces three key innovations: (1) a null-space constrained gating mechanism that mitigates interference with old-task parameters via subspace orthogonality; (2) an adaptive relaxation strategy that dynamically tunes constraint strength using a small set of unlabeled test samples; and (3) a low-rank mixture-of-experts architecture integrated with parameter-efficient fine-tuning, jointly optimizing forgetting suppression and robustness to distribution shifts. Evaluated on standard continual fusion benchmarks, our method achieves an average performance gain of 7–9% over state-of-the-art approaches, significantly alleviating forgetting while enhancing cross-task generalization.
📝 Abstract
Continual model merging integrates independently fine-tuned models sequentially without access to the original training data, providing a scalable and efficient solution to continual learning. However, current methods still face critical challenges, notably parameter interference among tasks and limited adaptability to evolving test distributions. The former causes catastrophic forgetting of integrated tasks, while the latter hinders effective adaptation to new tasks. To address these challenges, we propose MINGLE, a novel framework for test-time continual model merging, which leverages test-time adaptation using a small set of unlabeled test samples from the current task to dynamically guide the merging process. MINGLE employs a mixture-of-experts architecture composed of parameter-efficient, low-rank experts, enabling efficient adaptation and improving robustness to distribution shifts. To mitigate catastrophic forgetting, we propose Null-Space Constrained Gating, which restricts gating updates to subspaces orthogonal to prior task representations. This suppresses activations on old-task inputs and preserves model behavior on past tasks. To further balance stability and adaptability, we design an Adaptive Relaxation Strategy, which dynamically adjusts the constraint strength based on interference signals captured during test-time adaptation. Extensive experiments on standard continual merging benchmarks demonstrate that MINGLE achieves robust generalization, reduces forgetting significantly, and consistently surpasses previous state-of-the-art methods by 7–9% on average across diverse task orders.
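The core idea of null-space constrained gating—updating gate parameters only in directions orthogonal to prior-task representations—can be sketched as a gradient projection. The snippet below is a minimal illustration, not the paper's implementation: the SVD-based energy threshold, the linear gate, and the `relax` interpolation (standing in for the Adaptive Relaxation Strategy) are all illustrative assumptions.

```python
import numpy as np

def null_space_projector(past_feats, energy=0.99):
    """Build a projector onto the subspace orthogonal to past-task features.

    past_feats: (n_samples, d) feature matrix collected from previous tasks.
    Keeps the leading right-singular directions covering `energy` of the
    squared-singular-value mass, then projects out of their span.
    (The energy threshold is an illustrative choice, not from the paper.)
    """
    _, s, vt = np.linalg.svd(past_feats, full_matrices=False)
    cum = np.cumsum(s**2) / np.sum(s**2)
    k = int(np.searchsorted(cum, energy)) + 1
    U = vt[:k].T                                  # (d, k) basis of past-task subspace
    return np.eye(past_feats.shape[1]) - U @ U.T  # (d, d) null-space projector

def constrained_gate_update(W, grad, P, lr=0.1, relax=0.0):
    """Update gate weights W (d, m) with the gradient projected by P.

    With relax=0 the update direction lies entirely in the null space of
    past-task features, so gate outputs on old-task inputs are unchanged;
    relax in (0, 1] loosens the constraint, loosely mimicking adaptive
    relaxation under interference.
    """
    g_proj = P @ grad                    # component orthogonal to past features
    g = (1 - relax) * g_proj + relax * grad
    return W - lr * g
```

With a strict constraint (`relax=0`), `past_feats @ W_new == past_feats @ W` up to numerical error, which is exactly the "preserves model behavior on past tasks" property the abstract describes; new-task inputs with components outside the past-task subspace still receive a nonzero update.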