Enhancing Robustness of Implicit Neural Representations Against Weight Perturbations

📅 2025-08-18
📈 Citations: 0
Influential: 0
🤖 AI Summary
Implicit Neural Representations (INRs) suffer significant degradation in reconstruction quality under weight perturbations, revealing poor robustness. This work presents the first systematic investigation into the sensitivity mechanisms of INRs to weight perturbations and proposes a gradient-regularized robust training framework. Specifically, it minimizes the discrepancy between reconstruction losses before and after perturbation while explicitly constraining the magnitude of the loss gradient with respect to network weights—thereby enhancing model stability without increasing inference overhead or sacrificing architectural compatibility with mainstream INR designs. Evaluated across diverse modalities—including image, video, and 3D scene reconstruction—the method achieves up to a 7.5 dB PSNR improvement under noise corruption, substantially outperforming standard INRs. This establishes a novel paradigm for robust implicit modeling.

📝 Abstract
Implicit Neural Representations (INRs) encode discrete signals in a continuous manner using neural networks, demonstrating significant value across various multimedia applications. However, the vulnerability of INRs presents a critical challenge for their real-world deployment, as the network weights may be subjected to unavoidable perturbations. In this work, we investigate the robustness of INRs for the first time and find that even minor perturbations can lead to substantial degradation in signal reconstruction quality. To mitigate this issue, we formulate the robustness problem in INRs as minimizing the difference between the loss with and without weight perturbations. Furthermore, we derive a novel robust loss function that regulates the gradient of the reconstruction loss with respect to the weights, thereby enhancing robustness. Extensive experiments on reconstruction tasks across multiple modalities demonstrate that our method achieves up to a 7.5 dB improvement in peak signal-to-noise ratio (PSNR) compared to original INRs under noisy conditions.
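A compact way to state the objective described in the abstract (notation ours; the paper's exact formulation may differ): a first-order expansion of the perturbed loss, $\mathcal{L}(\theta+\delta) \approx \mathcal{L}(\theta) + \delta^{\top}\nabla_{\theta}\mathcal{L}(\theta)$, shows that the loss gap under a small weight perturbation $\delta$ is controlled by the gradient magnitude, which motivates a regularized objective of the form

```latex
\min_{\theta}\; \mathcal{L}(\theta) \;+\; \lambda \,\bigl\lVert \nabla_{\theta} \mathcal{L}(\theta) \bigr\rVert_2^2
```

where $\mathcal{L}$ is the reconstruction loss and $\lambda$ trades off reconstruction fidelity against robustness to weight perturbations.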
Problem

Research questions and friction points this paper is trying to address.

Enhancing the robustness of Implicit Neural Representations against weight perturbations
Addressing the vulnerability of INRs, where even minor weight changes cause performance degradation
Mitigating the substantial loss in signal reconstruction quality under noisy conditions
Innovation

Methods, ideas, or system contributions that make the work stand out.

Formulating robustness as minimizing the difference between the loss with and without weight perturbations
A novel robust loss function that regulates the gradient of the reconstruction loss with respect to the weights
Improved robustness without added inference overhead or changes to mainstream INR architectures
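The gradient-regularized training idea can be illustrated on a toy stand-in for an INR: fixed random sine features with a trainable linear head fitting a 1D signal. This is a minimal sketch under our own assumptions (all names, constants, and the penalty weight `lam` are illustrative), not the paper's actual training code.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "INR": fixed random sine features, trainable linear head.
x = np.linspace(-1.0, 1.0, 64)
freqs = rng.uniform(1.0, 6.0, size=8)
phases = rng.uniform(0.0, np.pi, size=8)
X = np.sin(np.outer(x, freqs) + phases)   # (64, 8) feature matrix
y = np.sin(2.0 * np.pi * x)               # target signal

n = len(x)
H = 2.0 * X.T @ X / n                     # Hessian of the MSE in w

def mse(w):
    r = X @ w - y
    return np.mean(r ** 2)

def grad_mse(w):
    return 2.0 * X.T @ (X @ w - y) / n

def robust_loss(w, lam=0.05):
    # Reconstruction loss plus a penalty on the magnitude of its
    # gradient w.r.t. the weights, so that small weight perturbations
    # change the loss less.
    g = grad_mse(w)
    return mse(w) + lam * g @ g

def grad_robust(w, lam=0.05):
    # Chain rule: d(g . g)/dw = 2 H g, since g is linear in w here.
    g = grad_mse(w)
    return g + 2.0 * lam * H @ g

w0 = rng.normal(size=8)
w = w0.copy()
for _ in range(500):
    w -= 0.01 * grad_robust(w)            # descend the robust objective
```

Because the toy problem is a convex quadratic, gradient descent on the regularized objective drives down both the reconstruction error and the gradient-norm penalty; in a real INR the same penalty would be applied via automatic differentiation rather than the closed-form gradient used here.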
Wenyong Zhou
The University of Hong Kong
Computer Vision
Yuxin Cheng
Department of EEE, The University of Hong Kong, Hong Kong SAR
Zhengwu Liu
The University of Hong Kong (HKU) / Tsinghua University (THU)
brain machine interfaces, computing in memory, memristor
Taiqiang Wu
University of Hong Kong | Tsinghua University
Model Compression, Efficient Methods
Chen Zhang
Department of EEE, The University of Hong Kong, Hong Kong SAR
Ngai Wong
Department of EEE, The University of Hong Kong, Hong Kong SAR