RenderBender: A Survey on Adversarial Attacks Using Differentiable Rendering

📅 2024-11-14
📈 Citations: 0
Influential Citations: 0
🤖 AI Summary
Existing physically realistic adversarial attacks enabled by differentiable rendering (e.g., Gaussian splatting, NeRF) lack systematic modeling and unified evaluation. Method: the paper proposes the first unified classification and evaluation framework, integrating differentiable rendering, physics-based scene modeling, and gradient-based optimization to enable end-to-end generation of multi-granularity scene perturbations (texture, geometry, and illumination) while unifying diverse attack objectives (e.g., misclassification, misdetection) across multimodal perception tasks. Contributions: (1) a standardized, cross-task, cross-modal adversarial benchmark for 3D perception; (2) the first systematic taxonomy of differentiable-rendering-based attacks, bridging the gaps among attack goals, scene manipulation primitives, and real-world threat modeling; (3) identification of three critical gaps: dynamic scene adaptability, real-time feasibility, and physical plausibility, each a concrete direction for advancing robustness research in 3D perception systems.
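The loop the summary describes, a differentiable renderer in the middle with gradient-based optimization of scene parameters around it, can be sketched in a few lines. The following is a minimal, self-contained illustration, not the paper's code: `render` and `victim` are toy stand-ins (assumptions introduced here) for a real differentiable renderer (mesh rasterizer, NeRF, or Gaussian splatting) and a real pretrained perception model.

```python
import torch
import torch.nn as nn

# Toy stand-ins (assumptions, not the paper's code): in a real attack,
# `render` is a differentiable renderer and `victim` a pretrained DNN.
render = nn.Conv2d(3, 3, kernel_size=3, padding=1)                 # scene params -> image
victim = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))   # classifier

texture = torch.rand(1, 3, 32, 32, requires_grad=True)  # attackable scene parameter
optimizer = torch.optim.Adam([texture], lr=1e-2)
true_class = 3                                           # label the attacker wants dropped

for step in range(100):
    optimizer.zero_grad()
    image = render(texture.clamp(0, 1))   # keep texture in a printable [0, 1] range
    logits = victim(image)
    loss = logits[0, true_class]          # untargeted: push the true class's logit down
    loss.backward()                       # gradients flow end-to-end through the renderer
    optimizer.step()
```

The key point the sketch illustrates is the survey's unifying view: swapping the optimized parameter (texture, geometry, illumination) or the loss (misclassification, misdetection) changes the attack, while the end-to-end gradient structure stays the same.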

📝 Abstract
Differentiable rendering techniques such as Gaussian Splatting and Neural Radiance Fields have become powerful tools for generating high-fidelity models of 3D objects and scenes. Their ability to produce models that are both physically plausible and differentiable is a key ingredient for producing physically plausible adversarial attacks on DNNs. However, the adversarial machine learning community has yet to fully explore these capabilities, partly due to differing attack goals (e.g., misclassification, misdetection) and the wide range of scene manipulations used to achieve them (e.g., altering texture or mesh). This survey contributes the first framework that unifies these diverse goals and tasks, facilitating comparison of existing work, identifying research gaps, and highlighting future directions, ranging from expanding attack goals and tasks to account for new modalities, state-of-the-art models, tools, and pipelines, to underscoring the importance of studying real-world threats in complex scenes.
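To make the abstract's "alter texture, mesh" distinction concrete, here is a hedged geometry-side analogue of the sketch above (again mine, not the paper's): instead of a texture, the attack optimizes a per-vertex offset, with an L2 penalty as a crude stand-in for the physical-plausibility constraints the survey discusses. `render_mesh`, `proj`, and `victim` are hypothetical placeholders.

```python
import torch

# Hypothetical differentiable mesh renderer: maps (V, 3) vertices to an
# image. A toy linear projection stands in so the sketch actually runs.
proj = torch.nn.Linear(3, 32 * 32)
def render_mesh(vertices):                      # (V, 3) -> (1, 1, 32, 32)
    return proj(vertices).mean(0).reshape(1, 1, 32, 32)

victim = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(32 * 32, 2))

vertices = torch.rand(100, 3)                             # benign object geometry
offsets = torch.zeros_like(vertices, requires_grad=True)  # attack variable
optimizer = torch.optim.Adam([offsets], lr=1e-2)

for step in range(100):
    optimizer.zero_grad()
    image = render_mesh(vertices + offsets)     # perturbed geometry -> image
    objectness = victim(image)[0, 1]            # "object present" score
    # Misdetection goal: suppress detection while keeping the shape
    # change small (a proxy for physical plausibility).
    loss = objectness + 0.1 * offsets.norm()
    loss.backward()
    optimizer.step()
```

Penalizing the offset norm is one simple design choice; the survey's taxonomy covers richer plausibility constraints (e.g., on illumination or printable textures) that slot into the same loss.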
Problem

Research questions and friction points this paper is trying to address.

Survey explores adversarial attacks via differentiable rendering techniques
Unifies diverse attack goals and scene manipulation methods
Identifies research gaps and future directions in adversarial learning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses differentiable rendering for adversarial attacks
Unifies diverse attack goals and tasks
Highlights real-world threats in complex scenes
👥 Authors
Matthew Hull
Georgia Institute of Technology
Chao Zhang
Georgia Institute of Technology
Z. Kira
Georgia Institute of Technology
Polo Chau
Georgia Institute of Technology
Haoran Wang
Matthew Lau
Alec Helbling
Machine Learning PhD Student, Georgia Tech
ML Interpretability, Diffusion Models, Visualization, Generative Models
Mansi Phute
Georgia Institute of Technology
adversarial machine learning, explainable AI
W. Lunardi
Martin Andreoni
Technology Innovation Institute (TII)
Network Security, Intrusion Detection, Cloud Computing, Secure Autonomous Systems
Wenke Lee
Georgia Tech