BlurGuard: A Simple Approach for Robustifying Image Protection Against AI-Powered Editing

📅 2025-10-31
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Existing image protection methods exhibit insufficient robustness against noise inversion attacks enabled by AI-based image editing tools (e.g., reverse denoising in Stable Diffusion). To address this, we propose BlurGuard—the first defense framework that explicitly leverages the irreversibility of adversarial noise as a core design principle. BlurGuard adaptively applies region-wise Gaussian blurring to modulate the noise’s frequency spectrum, effectively thwarting diverse noise inversion techniques, including JPEG compression. Integrated perceptual quality optimization ensures minimal visual distortion while substantially suppressing attack success rates. Extensive experiments demonstrate that BlurGuard consistently enhances protection across multiple editing scenarios: under worst-case conditions, it reduces attack success rate by up to 42.6% and improves PSNR by 3.8 dB—outperforming state-of-the-art defenses.

Technology Category

Application Category

📝 Abstract
Recent advances in text-to-image models have increased the exposure of powerful image editing techniques as a tool, raising concerns about their potential for malicious use. An emerging line of research to address such threats focuses on implanting "protective" adversarial noise into images before their public release, so future attempts to edit them using text-to-image models can be impeded. However, subsequent works have shown that these adversarial noises are often easily "reversed," e.g., with techniques as simple as JPEG compression, casting doubt on the practicality of the approach. In this paper, we argue that adversarial noise for image protection should not only be imperceptible, as has been a primary focus of prior work, but also irreversible, viz., it should be difficult to detect as noise provided that the original image is hidden. We propose a surprisingly simple method to enhance the robustness of image protection methods against noise reversal techniques. Specifically, it applies an adaptive per-region Gaussian blur on the noise to adjust the overall frequency spectrum. Through extensive experiments, we show that our method consistently improves the per-sample worst-case protection performance of existing methods against a wide range of reversal techniques on diverse image editing scenarios, while also reducing quality degradation due to noise in terms of perceptual metrics. Code is available at https://github.com/jsu-kim/BlurGuard.
Problem

Research questions and friction points this paper is trying to address.

Protecting images from malicious AI-powered editing techniques
Enhancing robustness of adversarial noise against reversal methods
Improving image protection while reducing perceptual quality degradation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Adaptive per-region Gaussian blur on noise
Adjusts overall frequency spectrum of protection
Enhances robustness against noise reversal techniques
🔎 Similar Papers
No similar papers found.
J
Jinsu Kim
Korea University
Y
Yunhun Nam
Korea University
Minseon Kim
Minseon Kim
Microsoft Research
AI SafetyRobustnessRepresentation learning
Sangpil Kim
Sangpil Kim
Korea University
Computer Vision
J
Jongheon Jeong
Korea University