AI Summary
To address intellectual property leakage risks arising from model merging, this paper proposes the first proactive, parameter-level defense mechanism. Methodologically, it embeds protection via MLP parameter reordering, attention head scaling, and basin perturbation in parameter space, without requiring additional training. The defense induces significant functional degradation in maliciously merged models while preserving near-original task performance (accuracy drop <0.3%). Our key contributions are: (i) the first proactive, training-free, parameter-level defense against model merging; and (ii) empirically validated robustness across image classification, image generation, and text classification tasks, where merged models suffer >40% average performance degradation under the defense, which also remains resilient against diverse adaptive attacks.
Abstract
Model merging is a technique that combines multiple finetuned models into a single model without additional training, allowing a free-rider to cheaply inherit specialized capabilities. This study investigates methodologies to suppress unwanted model merging by free-riders. Existing methods such as model watermarking or fingerprinting can only detect merging after the fact. In contrast, we propose the first proactive defense against model merging. Specifically, our defense modifies the model parameters so that the model is disrupted if merged with any other model, while its functionality remains unchanged otherwise. Our approach consists of two modules, rearranging MLP parameters and scaling attention heads, which push the model out of the shared basin in parameter space, causing merging performance with other models to degrade significantly. We conduct extensive experiments on image classification, image generation, and text classification to demonstrate that our defense severely disrupts merging while retaining the functionality of the protected model. Moreover, we analyze potential adaptive attacks and further propose dropout-based pruning to improve our proposal's robustness.
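The MLP-rearrangement module exploits a well-known symmetry: permuting the hidden units of an MLP (and applying the matching permutation to the next layer's input weights) leaves the computed function unchanged, but moves the parameters away from those of unpermuted siblings, so naive weight averaging mixes unrelated units. A minimal sketch of this idea, using a toy two-layer ReLU MLP with illustrative names (not the paper's actual implementation):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-layer MLP: f(x) = W2 @ relu(W1 @ x + b1) + b2
d_in, d_hid, d_out = 8, 16, 4
W1 = rng.normal(size=(d_hid, d_in))
b1 = rng.normal(size=d_hid)
W2 = rng.normal(size=(d_out, d_hid))
b2 = rng.normal(size=d_out)

def mlp(x, W1, b1, W2, b2):
    return W2 @ np.maximum(W1 @ x + b1, 0.0) + b2

# Function-preserving "protection": permute the hidden units by
# reordering the rows of (W1, b1) and the matching columns of W2.
perm = rng.permutation(d_hid)
W1p, b1p, W2p = W1[perm], b1[perm], W2[:, perm]

x = rng.normal(size=d_in)
y_orig = mlp(x, W1, b1, W2, b2)
y_prot = mlp(x, W1p, b1p, W2p, b2)
print(np.allclose(y_orig, y_prot))  # protected model computes the same function

# A free-rider averaging the protected weights with an unprotected
# sibling (here, the original model itself) now mixes unrelated
# hidden units, so the merged model's output diverges.
y_merged = mlp(x, (W1 + W1p) / 2, (b1 + b1p) / 2, (W2 + W2p) / 2, b2)
print(np.allclose(y_orig, y_merged))  # merging is disrupted
```

The same symmetry argument underlies the attention-head scaling module: scaling one projection up and a paired projection down by the same factor preserves the attention computation while further displacing the parameters from the shared basin.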