F-INR: Functional Tensor Decomposition for Implicit Neural Representations

πŸ“… 2025-03-27
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ“„ PDF
πŸ€– AI Summary
High-dimensional implicit neural representations (INRs) suffer from prohibitive computational overhead due to monolithic network architectures, where cost scales exponentially with dimensionality. To address this, we propose F-INRβ€”a functional tensor decomposition framework that decouples high-dimensional mappings into lightweight, axis-specific subnetworks and composes them via functional tensor decompositions (CP, TT, or Tucker). This constitutes the first reformulation of INR learning from a *function decomposition* perspective. F-INR is modular, architecture-agnostic, and decomposition-agnostic, enabling joint control over reconstruction fidelity and inference speed. Experiments demonstrate a 100Γ— training acceleration for video modeling with a 3.4 dB PSNR gain; significant improvements in compression efficiency, physical simulation accuracy, and 3D geometry reconstruction fidelity are also achieved across diverse benchmarks.

πŸ“ Abstract
Implicit Neural Representation (INR) has emerged as a powerful tool for encoding discrete signals into continuous, differentiable functions using neural networks. However, these models often rely on monolithic architectures to represent high-dimensional data, leading to prohibitive computational costs as dimensionality grows. We propose F-INR, a framework that reformulates INR learning through functional tensor decomposition, breaking down high-dimensional tasks into lightweight, axis-specific sub-networks. Each sub-network learns a low-dimensional data component (e.g., spatial or temporal). We then combine these components via tensor operations, reducing forward-pass complexity while improving accuracy through specialized learning. F-INR is modular and therefore architecture-agnostic, compatible with MLPs, SIREN, WIRE, and other state-of-the-art INR architectures. It is also decomposition-agnostic, supporting CP, TT, and Tucker modes with user-defined rank for speed-accuracy control. In our experiments, F-INR trains $100\times$ faster than existing approaches on video tasks while achieving higher fidelity (+3.4 dB PSNR). Similar gains hold for image compression, physics simulations, and 3D geometry reconstruction. Through this, F-INR offers a scalable, flexible solution for high-dimensional signal modeling.
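To make the CP-mode composition described in the abstract concrete, here is a minimal NumPy sketch (not the authors' code): each axis gets a tiny sub-network emitting rank-$R$ features, and the high-dimensional function value is recovered by an elementwise product across axes followed by a sum over the rank dimension. The network sizes, sine activation, and rank are illustrative assumptions.

```python
import numpy as np

def make_mlp(rng, in_dim, hidden, out_dim):
    """Tiny two-layer MLP with a sine activation (SIREN-style); random, untrained weights."""
    W1 = rng.normal(size=(in_dim, hidden)) / np.sqrt(in_dim)
    b1 = np.zeros(hidden)
    W2 = rng.normal(size=(hidden, out_dim)) / np.sqrt(hidden)
    b2 = np.zeros(out_dim)
    def forward(x):
        h = np.sin(x @ W1 + b1)
        return h @ W2 + b2
    return forward

def cp_inr(axis_nets, coords):
    """CP-style composition: f(x_1, ..., x_d) = sum_r prod_k g_k(x_k)[r].

    axis_nets: list of d per-axis sub-networks, each mapping (N, 1) -> (N, R)
    coords:    list of d coordinate arrays, each of shape (N, 1)
    """
    feats = [net(c) for net, c in zip(axis_nets, coords)]  # d arrays of shape (N, R)
    prod = feats[0]
    for f in feats[1:]:
        prod = prod * f            # elementwise product across axes
    return prod.sum(axis=-1)       # contract over the rank dimension

rng = np.random.default_rng(0)
rank = 8                                                   # user-defined CP rank
nets = [make_mlp(rng, 1, 32, rank) for _ in range(3)]      # e.g. x, y, t sub-networks
N = 5
coords = [rng.uniform(-1.0, 1.0, size=(N, 1)) for _ in range(3)]
vals = cp_inr(nets, coords)
print(vals.shape)  # (5,)
```

Each forward pass evaluates $d$ small 1-D networks instead of one monolithic $d$-dimensional network, which is the source of the speed-accuracy trade-off controlled by the rank.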
Problem

Research questions and friction points this paper is trying to address.

Reduces computational cost in high-dimensional INR models
Decomposes high-dimensional tasks into lightweight sub-networks
Improves accuracy and speed in signal modeling
Innovation

Methods, ideas, or system contributions that make the work stand out.

Functional tensor decomposition for high-dimensional tasks
Lightweight axis-specific sub-networks for specialized learning
Modular architecture-agnostic framework with tensor operations
πŸ”Ž Similar Papers
No similar papers found.