NatADiff: Adversarial Boundary Guidance for Natural Adversarial Diffusion

📅 2025-05-27
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Existing adversarial example research predominantly focuses on constrained perturbations, failing to reflect real-world failure modes and lacking principled modeling of naturally occurring adversarial examples. Method: We propose a novel natural adversarial example generation framework based on Denoising Diffusion Probabilistic Models (DDPMs), the first to integrate time-travel sampling with enhanced classifier guidance to steer denoising trajectories precisely toward the intersection manifold between true-class and adversarial-class data distributions. Contribution/Results: By incorporating adversarial-class conditional guidance and explicit manifold-intersection constraints, our method significantly improves semantic fidelity and cross-architecture transferability. Experiments demonstrate state-of-the-art attack success rates and lower Fréchet Inception Distance (FID) scores, indicating closer alignment with real-world misclassification distributions, and show that the method effectively bridges the gap between synthetically generated adversarial examples and empirically observed failure patterns in practical deployments.

📝 Abstract
Adversarial samples exploit irregularities in the manifold "learned" by deep learning models to cause misclassifications. The study of these adversarial samples provides insight into the features a model uses to classify inputs, which can be leveraged to improve robustness against future attacks. However, much of the existing literature focuses on constrained adversarial samples, which do not accurately reflect test-time errors encountered in real-world settings. To address this, we propose NatADiff, an adversarial sampling scheme that leverages denoising diffusion to generate natural adversarial samples. Our approach is based on the observation that natural adversarial samples frequently contain structural elements from the adversarial class. Deep learning models can exploit these structural elements to shortcut the classification process, rather than learning to genuinely distinguish between classes. To leverage this behavior, we guide the diffusion trajectory towards the intersection of the true and adversarial classes, combining time-travel sampling with augmented classifier guidance to enhance attack transferability while preserving image fidelity. Our method achieves comparable attack success rates to current state-of-the-art techniques, while exhibiting significantly higher transferability across model architectures and better alignment with natural test-time errors as measured by FID. These results demonstrate that NatADiff produces adversarial samples that not only transfer more effectively across models, but more faithfully resemble naturally occurring test-time errors.
Problem

Research questions and friction points this paper is trying to address.

Generates natural adversarial samples using denoising diffusion
Guides diffusion to exploit structural class overlap for misclassification
Improves transferability and fidelity of adversarial attacks across models
Innovation

Methods, ideas, or system contributions that make the work stand out.

Leverages denoising diffusion for natural adversarial samples
Guides diffusion trajectory to class intersection
Combines time-travel and classifier guidance
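The guided-trajectory idea above can be illustrated with a toy sketch. The actual NatADiff method denoises with a DDPM and differentiates through a neural classifier; here, purely for illustration, each class is a 2-D Gaussian so the guidance gradient (the score of the class log-likelihood) is analytic, and time-travel sampling is omitted. All function and variable names are hypothetical, not from the paper.

```python
import numpy as np

def class_score(x, mean, var=1.0):
    """Gradient of log N(x; mean, var*I) with respect to x."""
    return (mean - x) / var

def guided_denoise(x, mu_true, mu_adv, steps=200, step_size=0.05,
                   scale=1.0, seed=0):
    """Langevin-style updates pulled jointly toward both class means.

    Summing the guidance from the true and adversarial classes makes the
    trajectory settle near the intersection region of the two
    distributions, mimicking (very loosely) NatADiff's idea of steering
    denoising toward the true/adversarial class overlap.
    """
    rng = np.random.default_rng(seed)
    for _ in range(steps):
        guidance = scale * (class_score(x, mu_true) + class_score(x, mu_adv))
        noise = np.sqrt(step_size) * rng.standard_normal(x.shape)
        x = x + step_size * guidance + 0.1 * noise
    return x

mu_true = np.array([0.0, 0.0])   # "true class" mean (illustrative)
mu_adv = np.array([4.0, 0.0])    # "adversarial class" mean (illustrative)
x0 = np.array([10.0, 10.0])      # noisy starting point

x_final = guided_denoise(x0, mu_true, mu_adv)
# With equal guidance weight, the stationary point is near the midpoint
# of the two means, i.e. the region where both classes are plausible.
```

In the paper's setting the Gaussian scores are replaced by classifier gradients on images, and time-travel sampling (re-noising and re-denoising segments of the trajectory) is interleaved to keep the sample on the image manifold while the guidance pulls it toward the adversarial class.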
Max Collins
School of Physics, Maths and Computing, The University of Western Australia, Perth, WA 6009
Jordan Vice
Ph.D.
artificial intelligence, machine learning, explainable AI, affective computing
Tim French
School of Physics, Maths and Computing, The University of Western Australia, Perth, WA 6009
Ajmal Mian
School of Physics, Maths and Computing, The University of Western Australia, Perth, WA 6009