Disrupting Model Merging: A Parameter-Level Defense Without Sacrificing Accuracy

📅 2025-03-08
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
To address intellectual property leakage risks arising from model merging, this paper proposes the first proactive, parameter-level defense mechanism. Methodologically, it embeds protection via MLP parameter reordering, attention head scaling, and basin perturbation in parameter space, without requiring additional training. The defense induces significant functional degradation in maliciously merged models while preserving near-original task performance (accuracy drop <0.3%). The key contributions are: (i) the first proactive, training-free, parameter-level defense against model merging; and (ii) empirically validated robustness across image classification, image generation, and text classification tasks, where merged models suffer >40% average performance degradation under the defense, which also remains resilient against diverse adaptive attacks.
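The "MLP parameter reordering" mentioned above relies on a standard permutation symmetry of feedforward networks. A minimal NumPy sketch (my own illustration, not the paper's code; `mlp`, `W1`, `W2`, etc. are hypothetical names) shows that permuting hidden units leaves the function unchanged while relocating the weights in parameter space:

```python
import numpy as np

# Assumed toy model: a 2-layer ReLU MLP f(x) = W2 @ relu(W1 @ x + b1) + b2.
rng = np.random.default_rng(0)
d_in, d_hidden, d_out = 4, 8, 3
W1 = rng.standard_normal((d_hidden, d_in))
b1 = rng.standard_normal(d_hidden)
W2 = rng.standard_normal((d_out, d_hidden))
b2 = rng.standard_normal(d_out)

def mlp(x, W1, b1, W2, b2):
    return W2 @ np.maximum(W1 @ x + b1, 0.0) + b2

# Reorder the hidden units: permute rows of layer 1 (and its bias),
# and the matching columns of layer 2.
perm = rng.permutation(d_hidden)
W1p, b1p = W1[perm], b1[perm]
W2p = W2[:, perm]

# The permuted model computes exactly the same function...
x = rng.standard_normal(d_in)
assert np.allclose(mlp(x, W1, b1, W2, b2), mlp(x, W1p, b1p, W2p, b2))
# ...but its parameters now sit at a different point in weight space,
# so naive weight averaging with an unpermuted model mismatches units.
```

Because merging methods average (or interpolate) weights coordinate-wise, moving a model out of the shared basin this way can break the alignment that merging depends on without touching its own accuracy.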

๐Ÿ“ Abstract
Model merging is a technique that combines multiple finetuned models into a single model without additional training, allowing a free-rider to cheaply inherit specialized capabilities. This study investigates methodologies to suppress unwanted model merging by free-riders. Existing methods such as model watermarking or fingerprinting can only detect merging in hindsight. In contrast, we propose the first proactive defense against model merging. Specifically, our defense method modifies the model parameters so that the model is disrupted if merged with any other model, while its functionality is kept unchanged if not merged. Our approach consists of two modules, rearranging MLP parameters and scaling attention heads, which push the model out of the shared basin in parameter space, causing merging performance with other models to degrade significantly. We conduct extensive experiments on image classification, image generation, and text classification to demonstrate that our defense severely disrupts merging while retaining the functionality of the protected model. Moreover, we analyze potential adaptive attacks and further propose a dropout-based pruning to improve our proposal's robustness.
Problem

Research questions and friction points this paper is trying to address.

Prevent unauthorized model merging without accuracy loss.
Propose proactive defense by modifying model parameters.
Disrupt merging performance while maintaining original functionality.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Proactive defense disrupts model merging effectively.
Rearranges MLP parameters and scales attention heads.
Dropout-based pruning enhances defense robustness.
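The second module, attention head scaling, can likewise exploit a functional symmetry. One plausible mechanism (an assumption for illustration; the paper may scale differently) is that multiplying a head's query projection by a constant and dividing its key projection by the same constant leaves the attention logits, and hence the head's output, unchanged:

```python
import numpy as np

# Hypothetical single attention head: names (Wq, Wk, Wv, head) are mine.
rng = np.random.default_rng(1)
d_model, d_head, n_tokens = 6, 4, 5
Wq = rng.standard_normal((d_model, d_head))
Wk = rng.standard_normal((d_model, d_head))
Wv = rng.standard_normal((d_model, d_head))
X = rng.standard_normal((n_tokens, d_model))

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def head(X, Wq, Wk, Wv):
    # Scaled dot-product attention for one head.
    A = softmax((X @ Wq) @ (X @ Wk).T / np.sqrt(d_head))
    return A @ (X @ Wv)

# Scaling Wq by alpha and Wk by 1/alpha cancels inside Q @ K.T,
# so the head's output is identical, but the stored weights differ.
alpha = 3.7
assert np.allclose(head(X, Wq, Wk, Wv), head(X, alpha * Wq, Wk / alpha, Wv))
```

As with the MLP permutation, the scaled weights average badly with an unscaled model during merging, even though the protected model's own behavior is untouched.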
Junhao Wei
RIKEN AIP; Institute of Science Tokyo
Yu Zhe
RIKEN AIP
Adversarial Machine Learning
Sakuma Jun
RIKEN AIP; Institute of Science Tokyo