BadRSSD: Backdoor Attacks on Regularized Self-Supervised Diffusion Models

πŸ“… 2026-03-01
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ€– AI Summary
This work addresses the vulnerability of self-supervised diffusion models to backdoor attacks at the representation layer and proposes the first attack method that hijacks trigger-sample representations toward a target image in PCA semantic space. By imposing coordinated constraints across the latent space, pixel space, and feature distribution space, and introducing a representation dispersion regularization to enhance stealthiness, the approach achieves precise trigger activation while preserving the model's normal functionality (high utility). Extensive experiments demonstrate that the proposed attack significantly outperforms existing methods across multiple benchmark datasets, achieving better FID and MSE scores. Moreover, it reliably implants backdoors across diverse model architectures and effectively evades state-of-the-art defense mechanisms.

πŸ“ Abstract
Self-supervised diffusion models learn high-quality visual representations via latent space denoising. However, their representation layer poses a distinct threat: unlike traditional attacks targeting generative outputs, its unconstrained latent semantic space allows for stealthy backdoors, permitting malicious control upon triggering. In this paper, we propose BadRSSD, the first backdoor attack targeting the representation layer of self-supervised diffusion models. Specifically, it hijacks the semantic representations of poisoned samples with triggers in Principal Component Analysis (PCA) space toward those of a target image, then controls the denoising trajectory during diffusion by applying coordinated constraints across latent, pixel, and feature distribution spaces to steer the model toward generating the specified target. Additionally, we integrate representation dispersion regularization into the constraint framework to maintain feature space uniformity, significantly enhancing attack stealth. This approach preserves normal model functionality (high utility) while achieving precise target generation upon trigger activation (high specificity). Experiments on multiple benchmark datasets demonstrate that BadRSSD substantially outperforms existing attacks in both FID and MSE metrics, reliably establishing backdoors across different architectures and configurations, and effectively resisting state-of-the-art backdoor defenses.
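The abstract describes two ingredients: a hijacking objective that pulls a poisoned sample's representation toward a target image's representation in PCA space, and a dispersion regularizer that keeps clean features spread out to preserve stealth. A minimal NumPy sketch of what such an objective could look like is below; the function names, the MSE-in-PCA-space loss form, and the cosine-similarity dispersion term are illustrative assumptions, not the paper's actual implementation:

```python
import numpy as np

def pca_basis(reps, k):
    # Fit a k-dimensional PCA basis on clean representations (n, d).
    mean = reps.mean(axis=0)
    centered = reps - mean
    # SVD of the centered data: rows of vt are principal directions.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return mean, vt[:k]  # shapes: (d,) and (k, d)

def hijack_loss(poisoned_rep, target_rep, mean, basis):
    # Distance between poisoned and target representations after
    # projecting both into the PCA semantic space.
    zp = basis @ (poisoned_rep - mean)
    zt = basis @ (target_rep - mean)
    return float(np.mean((zp - zt) ** 2))

def dispersion_reg(reps):
    # Representation dispersion term (assumed form): mean pairwise
    # cosine similarity of clean features; minimizing it pushes
    # features apart, keeping the feature space uniform.
    normed = reps / np.linalg.norm(reps, axis=1, keepdims=True)
    sim = normed @ normed.T
    n = len(reps)
    return float((sim.sum() - n) / (n * (n - 1)))
```

In a training loop these two terms would be weighted and added to the model's usual denoising loss, so the backdoor objective steers only triggered samples while the dispersion term preserves clean-feature uniformity.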
Problem

Research questions and friction points this paper is trying to address.

backdoor attacks
self-supervised diffusion models
representation layer
latent semantic space
stealthy backdoors
Innovation

Methods, ideas, or system contributions that make the work stand out.

backdoor attack
self-supervised diffusion models
representation layer
PCA space hijacking
representation dispersion regularization
Jiayao Wang
School of Information and Artificial Intelligence, Yangzhou University, China
Yiping Zhang
School of Information and Artificial Intelligence, Yangzhou University, China
Mohammad Maruf Hasan
School of Information and Artificial Intelligence, Yangzhou University, China
Xiaoying Lei
Lecturer, Yangzhou University
wireless communications, vehicular networks, machine learning, federated learning
Jiale Zhang
Yangzhou University
AI security and privacy, federated learning, blockchain
Junwu Zhu
School of Information and Artificial Intelligence, Yangzhou University, China
Qilin Wu
School of Computing and Artificial Intelligence, Chaohu University, China
Dongfang Zhao
Assistant Professor, University of Washington
Databases, AI, HPC, Cryptography, Arithmetic Geometry