Implicit Neural Representation for Video Restoration

📅 2025-06-05
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing video restoration methods suffer from limited generalizability due to reliance on fixed scaling factors and predefined noise priors, rendering them ineffective for unseen super-resolution (SR) scales and unknown degradations. To address this, we propose the first INR-based video restoration framework, introducing hierarchical spatiotemporal texture encoding and multi-resolution implicit hash encoding. Crucially, our method requires training at only a single SR scale (×4) yet supports arbitrary SR factors and zero-shot denoising without retraining. By eliminating explicit degradation modeling, it enables adaptive high-resolution frame reconstruction directly from implicit neural representations. Extensive experiments demonstrate substantial improvements over state-of-the-art methods under unseen scales and noise conditions: higher PSNR and SSIM, sharper textures, richer fine detail, and more robust noise suppression.

📝 Abstract
High-resolution (HR) videos play a crucial role in many computer vision applications. Although existing video restoration (VR) methods can significantly enhance video quality by exploiting temporal information across video frames, they are typically trained for fixed upscaling factors and lack the flexibility to handle scales or degradations beyond their training distribution. In this paper, we introduce VR-INR, a novel video restoration approach based on Implicit Neural Representations (INRs) that is trained only on a single upscaling factor ($\times 4$) but generalizes effectively to arbitrary, unseen super-resolution scales at test time. Notably, VR-INR also performs zero-shot denoising on noisy input, despite never having seen noisy data during training. Our method employs a hierarchical spatial-temporal-texture encoding framework coupled with multi-resolution implicit hash encoding, enabling adaptive decoding of high-resolution and noise-suppressed frames from low-resolution inputs at any desired magnification. Experimental results show that VR-INR consistently maintains high-quality reconstructions at scales and noise levels unseen during training, significantly outperforming state-of-the-art approaches in sharpness, detail preservation, and denoising efficacy.
Problem

Research questions and friction points this paper is trying to address.

Generalizes video restoration to unseen super-resolution scales
Performs zero-shot denoising without noisy training data
Enables adaptive high-resolution decoding from low-resolution inputs
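The arbitrary-scale decoding above rests on the coordinate-based nature of INRs: the decoder is queried at continuous, normalized pixel coordinates, so rendering at a new magnification just means building a denser query grid with the same trained weights. A minimal sketch of that coordinate-grid construction (NumPy; `make_coords` is an illustrative helper, not code from the paper):

```python
import numpy as np

def make_coords(h, w):
    """Normalized pixel-center coordinates in (-1, 1) for an h x w output grid."""
    ys = (np.arange(h) + 0.5) / h * 2 - 1
    xs = (np.arange(w) + 0.5) / w * 2 - 1
    gy, gx = np.meshgrid(ys, xs, indexing="ij")
    return np.stack([gy, gx], axis=-1).reshape(-1, 2)  # (h*w, 2) query points

# The same decoder weights serve any scale: only the query grid changes.
coords_x2 = make_coords(2 * 64, 2 * 64)  # x2 upscaling of a 64x64 input
coords_x3 = make_coords(3 * 64, 3 * 64)  # x3, no retraining needed
```

Each row of the returned array would be fed (with its encoded features) to the decoding MLP, which predicts that pixel's color.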
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses Implicit Neural Representations (INRs)
Hierarchical spatial-temporal-texture encoding framework
Multi-resolution implicit hash encoding
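The multi-resolution hash encoding bullet follows the Instant-NGP recipe: each resolution level hashes a coordinate's enclosing grid cell into a small learnable feature table, and features from all levels are concatenated. The sketch below is a simplified, hypothetical version (nearest-corner lookup instead of bilinear interpolation, random tables instead of trained ones), not the paper's implementation:

```python
import numpy as np

# Per-dimension hashing primes, as in Instant-NGP's spatial hash.
PRIMES = np.array([1, 2654435761], dtype=np.uint64)

def hash_encode(coords, n_levels=4, table_size=2**14, feat_dim=2,
                base_res=16, growth=2.0, rng=None):
    """Encode 2D coords in [0, 1) with a multi-resolution hash grid.

    coords: (N, 2) array. Returns (N, n_levels * feat_dim) features.
    """
    rng = np.random.default_rng(0) if rng is None else rng
    tables = [rng.normal(0.0, 1e-2, (table_size, feat_dim))
              for _ in range(n_levels)]  # stand-ins for learned tables
    feats = []
    for lvl, table in enumerate(tables):
        res = int(base_res * growth ** lvl)            # grid resolution at this level
        idx = np.minimum((coords * res).astype(np.int64),
                         res - 1).astype(np.uint64)    # nearest grid corner
        h = np.bitwise_xor.reduce(idx * PRIMES, axis=-1) % table_size
        feats.append(table[h.astype(np.int64)])        # table lookup per point
    return np.concatenate(feats, axis=-1)
```

Coarse levels give every query a smooth, collision-free feature; fine levels pack many cells into the same table and rely on the decoding MLP to resolve hash collisions, which keeps memory small while preserving high-frequency detail.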
Mary Aiyetigbo
Clemson University, School of Computing
Wanqi Yuan
Clemson University, School of Computing
Feng Luo
Clemson University, School of Computing
Nianyi Li
Assistant Professor, Clemson University
Computer Vision · Computational Photography · Machine Learning