🤖 AI Summary
To address the vulnerability of Neural Radiance Fields (NeRFs) to intellectual property theft, and the geometric distortion and rendering-quality degradation caused by existing protection methods, this paper proposes AegisRF, a sensitivity-guided adversarial perturbation framework. The method introduces a learnable Sensitivity Field that adaptively constrains the geometric impact of perturbations in the pre-rendering output space, and jointly optimizes it with a Perturbation Field to achieve covert, high-fidelity, and multi-task-robust copyright protection. Experiments demonstrate that the proposed approach significantly degrades unauthorized downstream task performance, reducing accuracy by over 40% on average for multi-view image classification and voxel-level 3D localization, while preserving novel-view synthesis quality (PSNR > 30 dB). To the authors' knowledge, this is the first method to inject adversarial perturbations into a NeRF's implicit geometry while preserving rendering quality.
📝 Abstract
As Neural Radiance Fields (NeRFs) have emerged as a powerful tool for 3D scene representation and novel view synthesis, protecting their intellectual property (IP) from unauthorized use is becoming increasingly crucial. In this work, we aim to protect the IP of NeRFs by injecting adversarial perturbations that disrupt their unauthorized applications. However, perturbing the 3D geometry of NeRFs can easily deform the underlying scene structure and thus substantially degrade the rendering quality, which has led existing attempts to avoid geometric perturbations or restrict them to explicit spaces like meshes. To overcome this limitation, we introduce a learnable sensitivity to quantify the spatially varying impact of geometric perturbations on rendering quality. Building upon this, we propose AegisRF, a novel framework that consists of a Perturbation Field, which injects adversarial perturbations into the pre-rendering outputs (color and volume density) of NeRF models to fool an unauthorized downstream target model, and a Sensitivity Field, which learns the sensitivity to adaptively constrain geometric perturbations, preserving rendering quality while disrupting unauthorized use. Our experimental evaluations demonstrate the generalized applicability of AegisRF across diverse downstream tasks and modalities, including multi-view image classification and voxel-based 3D localization, while maintaining high visual fidelity. Code is available at https://github.com/wkim97/AegisRF.
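The core idea of the framework, perturbing a NeRF's pre-rendering outputs (color and volume density) while a learned per-point sensitivity gates how much the geometry may change, can be sketched roughly as below. This is a minimal illustrative sketch, not the paper's implementation: all function names, the gating rule `(1 - sensitivity) * d_sigma`, and the quadratic fidelity penalty are assumptions for exposition; the actual losses and optimization are defined in the paper and repository.

```python
import numpy as np

def apply_perturbation(color, sigma, d_color, d_sigma, sensitivity):
    """Add adversarial perturbations to a NeRF's pre-rendering outputs.

    The appearance perturbation is applied directly; the geometric (density)
    perturbation is down-weighted where the learned sensitivity is high,
    i.e. where geometry changes would visibly degrade rendering quality.
    """
    pert_color = color + d_color                         # appearance perturbation
    pert_sigma = sigma + (1.0 - sensitivity) * d_sigma   # sensitivity-gated geometry
    return pert_color, np.maximum(pert_sigma, 0.0)       # volume density stays >= 0

def sensitivity_penalty(d_sigma, sensitivity):
    """Fidelity term: penalize geometric perturbations at sensitive points."""
    return float(np.mean(sensitivity * d_sigma ** 2))

# Toy example: 5 sample points along one ray (values are random placeholders).
rng = np.random.default_rng(0)
color = rng.uniform(0.0, 1.0, (5, 3))       # RGB per sample point
sigma = rng.uniform(0.0, 2.0, (5,))         # volume density per point
d_color = 0.01 * rng.standard_normal((5, 3))
d_sigma = 0.10 * rng.standard_normal((5,))
sens = rng.uniform(0.0, 1.0, (5,))          # learned sensitivity in [0, 1]

pert_color, pert_sigma = apply_perturbation(color, sigma, d_color, d_sigma, sens)
fidelity_loss = sensitivity_penalty(d_sigma, sens)
```

In the full method both fields are trained jointly: the perturbation field maximizes the downstream target model's loss while this kind of sensitivity-weighted fidelity term keeps the rendered views close to the originals.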