When Safe Models Merge into Danger: Exploiting Latent Vulnerabilities in LLM Fusion

📅 2026-04-01
📈 Citations: 0
Influential: 0
🤖 AI Summary
While model merging can enhance the performance of large language models (LLMs), it may inadvertently introduce security risks that compromise alignment in the merged model. This work presents TrojanMerge, a framework that exploits a previously overlooked attack surface in the merging process: by embedding latent perturbations into source models under directional consistency constraints, an adversary can induce highly harmful outputs in the merged model while preserving the individual safety and capabilities of each source model. The attack is formulated as a constrained optimization problem that uses Frobenius-norm-based directional alignment to precompute effective perturbation vectors. Experiments across nine LLMs from three model families demonstrate that TrojanMerge significantly increases harmful response rates in merged models without degrading source model safety or performance, and that the attack remains effective across diverse merging algorithms and hyperparameter configurations.
📝 Abstract
Model merging has emerged as a powerful technique for combining specialized capabilities from multiple fine-tuned LLMs without additional training costs. However, the security implications of this widely adopted practice remain critically underexplored. In this work, we reveal that model merging introduces a novel attack surface that can be systematically exploited to compromise safety alignment. We present TrojanMerge, a framework that embeds latent malicious components into source models that remain individually benign but produce severely misaligned models when merged. Our key insight is formulating this attack as a constrained optimization problem: we construct perturbations that preserve source model safety through directional consistency constraints, maintain capabilities via Frobenius directional alignment constraints, yet combine during merging to form pre-computed attack vectors. Extensive experiments across 9 LLMs from 3 model families demonstrate that TrojanMerge consistently achieves high harmful response rates in merged models while source models maintain safety scores comparable to unmodified versions. Our attack succeeds across diverse merging algorithms and remains effective under various hyperparameter configurations. These findings expose fundamental vulnerabilities in current model merging practices and highlight the urgent need for security-aware merging mechanisms.
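The core idea in the abstract can be illustrated with a toy sketch: an adversary splits a pre-computed attack vector across two source models' weight deltas so that the pieces cancel each other's malicious component individually, yet sum back to the attack vector under a simple task-arithmetic-style merge. This is a minimal NumPy illustration of the splitting idea only; the variable names (`attack`, `mask`, `delta1`, `delta2`) and the additive merge are assumptions for exposition, not the paper's actual constrained optimization over Frobenius-norm directional alignment.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8

# Hypothetical pre-computed "attack vector" the adversary wants the merged model to carry.
attack = rng.normal(size=d)

# A cancelling component: each source model's delta hides half the attack behind it.
mask = rng.normal(size=d)
delta1 = 0.5 * attack + mask   # planted in source model 1
delta2 = 0.5 * attack - mask   # planted in source model 2

def cos(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity, a stand-in for a directional consistency check."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# A simple additive (task-arithmetic-style) merge of the two deltas.
merged = delta1 + delta2

# The mask components cancel exactly, so the attack vector re-emerges on merging.
print(cos(merged, attack))   # ≈ 1.0
```

In the actual attack, the analogous cancellation would have to hold approximately across real merging algorithms and hyperparameters, which is why the paper frames the construction as a constrained optimization rather than an exact algebraic split.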
Problem

Research questions and friction points this paper is trying to address.

model merging
safety alignment
latent vulnerabilities
large language models
security
Innovation

Methods, ideas, or system contributions that make the work stand out.

model merging
safety alignment
latent vulnerability
constrained optimization
TrojanMerge
Jiaqing Li
Huazhong University of Science and Technology, Wuhan, 430074, China
Zhibo Zhang
Huazhong University of Science and Technology, Wuhan, 430074, China
Shide Zhou
Huazhong University of Science and Technology, Wuhan, 430074, China
Yuxi Li
Unknown affiliation
machine learning, computer vision
Tianlong Yu
CMU
Kailong Wang
Huazhong University of Science and Technology, Wuhan, 430074, China