🤖 AI Summary
This work addresses three key bottlenecks in embodied visual navigation under physical adversarial attacks: poor transferability of digital perturbations to the physical world, the ineffectiveness of existing physical attacks across multiple viewpoints, and visual unnaturalness. We propose the first learnable physical patch attack designed for real-world deployment in this setting. Our approach attaches adversarial patches with jointly optimized texture and transparency onto scene objects, and introduces an object-aware, multi-view differentiable rendering framework with a two-stage transparency fine-tuning mechanism that balances attack efficacy against human visual naturalness. Evaluated on standard navigation benchmarks, the method reduces navigation success rate by 22.39% on average, significantly outperforming prior work in physical feasibility, attack strength, and visual stealth.
📝 Abstract
The significant advancements in embodied vision navigation have raised concerns about its susceptibility to adversarial attacks that exploit deep neural networks. Investigating the adversarial robustness of embodied vision navigation is crucial, especially given the threat of 3D physical attacks that could pose risks to human safety. However, existing attack methods for embodied vision navigation often lack physical feasibility because digital perturbations are difficult to transfer into the physical world. Moreover, current physical attacks designed for object detection struggle to achieve both multi-view effectiveness and visual naturalness in navigation scenarios. To address these challenges, we propose a practical attack method for embodied navigation that attaches adversarial patches with learnable opacity and textures to objects. Specifically, to ensure effectiveness across varying viewpoints, we employ a multi-view optimization strategy based on object-aware sampling, which optimizes the patch's texture using feedback from the vision-based perception model used in navigation. To make the patch inconspicuous to human observers, we introduce a two-stage opacity optimization mechanism in which opacity is fine-tuned after texture optimization. Experimental results demonstrate that our adversarial patches decrease the navigation success rate by an average of 22.39%, outperforming previous methods in practicality, effectiveness, and naturalness. Code is available at: https://github.com/chen37058/Physical-Attacks-in-Embodied-Nav
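The two-stage scheme described above can be sketched in miniature. This is **not** the paper's implementation (see the linked repository for that); it is a toy, pure-Python sketch under stated assumptions: the patch is reduced to a single scalar texture value composited as `alpha * texture + (1 - alpha) * background`, the navigation model's perception loss is replaced by a hand-made surrogate averaged over a few "views", and the stage-2 retention threshold (99% of the stage-1 attack loss) is an illustrative choice, not a value from the paper.

```python
# Toy sketch of two-stage patch optimization (assumptions labeled, not the paper's code).
VIEWS = [0.6, 0.9, 1.2]   # per-view scaling factors standing in for viewpoint changes (assumption)
BACKGROUND = 0.4          # intensity of the object surface under the patch (assumption)

def perception_loss(texture, alpha, view):
    # Surrogate for the navigation perception model's loss at one viewpoint:
    # higher means the model is more disrupted. Peaks when the composited
    # pixel is mid-gray, purely for illustration.
    pixel = alpha * texture + (1 - alpha) * BACKGROUND
    return view * pixel * (1.0 - pixel)

def multi_view_loss(texture, alpha):
    # Multi-view objective: average attack loss over sampled viewpoints.
    return sum(perception_loss(texture, alpha, v) for v in VIEWS) / len(VIEWS)

def grad(f, x, eps=1e-5):
    # Central-difference gradient; the real method uses differentiable rendering.
    return (f(x + eps) - f(x - eps)) / (2 * eps)

# Stage 1: optimize texture (gradient ascent on attack loss, alpha fixed at 1).
texture, alpha = 0.1, 1.0
for _ in range(200):
    g = grad(lambda t: multi_view_loss(t, alpha), texture)
    texture = min(1.0, max(0.0, texture + 0.05 * g))

# Stage 2: freeze texture, fine-tune opacity downward for naturalness while
# retaining at least 99% of the stage-1 attack loss (threshold is an assumption).
floor = 0.99 * multi_view_loss(texture, alpha)
while alpha > 0.0 and multi_view_loss(texture, alpha - 0.01) >= floor:
    alpha -= 0.01

print(f"texture={texture:.3f}, alpha={alpha:.2f}")
```

In this toy, stage 1 drives the texture toward the surrogate's maximizer, and stage 2 lowers opacity until the attack loss would dip below the retention floor, mirroring the effectiveness-versus-naturalness trade-off the abstract describes.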