AegisRF: Adversarial Perturbations Guided with Sensitivity for Protecting Intellectual Property of Neural Radiance Fields

📅 2025-10-22
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the vulnerability of Neural Radiance Fields (NeRFs) to intellectual property theft, and the geometric distortion and rendering-quality degradation that limit existing protection attempts, this paper proposes a sensitivity-guided adversarial perturbation framework. The method introduces a learnable sensitivity field that quantifies the spatially varying impact of geometric perturbations on rendering quality, and jointly optimizes it with a perturbation field that injects adversarial perturbations into the pre-rendering outputs (color and volume density) to achieve covert, high-fidelity, multi-task-robust copyright protection. Experiments demonstrate that the proposed approach significantly degrades unauthorized downstream task performance, reducing accuracy by over 40% on average for multi-view image classification and voxel-level 3D localization, while preserving novel-view synthesis quality with PSNR above 30 dB. Whereas prior attempts avoid geometric perturbations or restrict them to explicit representations such as meshes, this is, to the authors' knowledge, the first framework to adversarially perturb NeRF's implicit geometry while preserving rendering quality.

📝 Abstract
As Neural Radiance Fields (NeRFs) have emerged as a powerful tool for 3D scene representation and novel view synthesis, protecting their intellectual property (IP) from unauthorized use is becoming increasingly crucial. In this work, we aim to protect the IP of NeRFs by injecting adversarial perturbations that disrupt their unauthorized applications. However, perturbing the 3D geometry of NeRFs can easily deform the underlying scene structure and thus substantially degrade the rendering quality, which has led existing attempts to avoid geometric perturbations or restrict them to explicit spaces like meshes. To overcome this limitation, we introduce a learnable sensitivity to quantify the spatially varying impact of geometric perturbations on rendering quality. Building upon this, we propose AegisRF, a novel framework that consists of a Perturbation Field, which injects adversarial perturbations into the pre-rendering outputs (color and volume density) of NeRF models to fool an unauthorized downstream target model, and a Sensitivity Field, which learns the sensitivity to adaptively constrain geometric perturbations, preserving rendering quality while disrupting unauthorized use. Our experimental evaluations demonstrate the generalized applicability of AegisRF across diverse downstream tasks and modalities, including multi-view image classification and voxel-based 3D localization, while maintaining high visual fidelity. Codes are available at https://github.com/wkim97/AegisRF.
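The abstract's core mechanism, perturbing the pre-rendering outputs (color and volume density) while a sensitivity value gates geometric changes, can be sketched on top of standard NeRF volume rendering. The function names and the gating rule below are illustrative assumptions, not the paper's actual implementation:

```python
import numpy as np

def volume_render(sigma, color, deltas):
    """Standard NeRF volume rendering along a single ray.
    sigma: (N,) volume densities, color: (N, 3) RGB samples,
    deltas: (N,) spacings between consecutive samples."""
    alpha = 1.0 - np.exp(-sigma * deltas)                          # per-sample opacity
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alpha[:-1]]))  # accumulated transmittance
    weights = trans * alpha
    return (weights[:, None] * color).sum(axis=0)                  # composited RGB

def perturbed_render(sigma, color, d_sigma, d_color, sensitivity, deltas):
    """Hypothetical perturbation injection: density perturbations are
    scaled down where the learned sensitivity is high, so regions whose
    geometry strongly affects rendering quality receive smaller changes."""
    gate = 1.0 / (1.0 + sensitivity)          # illustrative gating, not the paper's exact rule
    sigma_adv = np.clip(sigma + gate * d_sigma, 0.0, None)
    color_adv = np.clip(color + d_color, 0.0, 1.0)
    return volume_render(sigma_adv, color_adv, deltas)
```

With zero perturbations the adversarial render reduces exactly to the clean render, which is the fidelity baseline the sensitivity field is meant to protect.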
Problem

Research questions and friction points this paper is trying to address.

Protecting the intellectual property of Neural Radiance Fields from unauthorized use
Injecting adversarial perturbations without degrading rendering quality
Quantifying spatial impact of geometric perturbations on visual fidelity
Innovation

Methods, ideas, or system contributions that make the work stand out.

Adversarial perturbations injected into pre-rendering outputs
Learnable sensitivity field constrains geometric perturbations adaptively
Joint optimization of perturbation and sensitivity fields preserves visual fidelity
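A toy version of the joint optimization these bullets describe could combine an adversarial term against the downstream model with a fidelity penalty weighted by the learned sensitivity. The loss shape below is an assumed sketch, not the paper's objective:

```python
import numpy as np

def joint_objective(adv_logits, true_label, d_sigma, sensitivity, lam=1.0):
    """Illustrative joint loss (not the paper's exact formulation):
    an adversarial term that pushes down the downstream model's confidence
    on the true class, plus a sensitivity-weighted penalty that keeps
    density perturbations small where rendering quality is fragile."""
    logits = adv_logits - adv_logits.max()           # numerically stable softmax
    probs = np.exp(logits) / np.exp(logits).sum()
    adv_term = np.log(probs[true_label] + 1e-12)     # minimized -> misclassification
    fidelity_term = np.mean(sensitivity * d_sigma ** 2)
    return adv_term + lam * fidelity_term
```

Raising the sensitivity in a region raises the cost of perturbing its density there, which is how the framework trades attack strength against rendering fidelity.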
Woo Jae Kim
School of Computing, Korea Advanced Institute of Science and Technology (KAIST), Daejeon, Korea
Kyu Beom Han
School of Computing, Korea Advanced Institute of Science and Technology (KAIST), Daejeon, Korea
Yoonki Cho
School of Computing, Korea Advanced Institute of Science and Technology (KAIST), Daejeon, Korea
Youngju Na
School of Computing, Korea Advanced Institute of Science and Technology (KAIST), Daejeon, Korea
Junsik Jung
School of Computing, Korea Advanced Institute of Science and Technology (KAIST), Daejeon, Korea
Sooel Son
KAIST (Web Security, Privacy, Program Analysis)
Sung-eui Yoon
School of Computing, Korea Advanced Institute of Science and Technology (KAIST), Daejeon, Korea