Towards Lossless Implicit Neural Representation via Bit Plane Decomposition

📅 2025-02-28
📈 Citations: 0
Influential: 0
🤖 AI Summary
Implicit Neural Representations (INRs) suffer from exponential growth in required model capacity as bit precision increases, hindering high-fidelity lossless modeling. Method: We propose a bit-plane decomposition framework that splits a signal into its individual bit planes and trains the network to directly predict each bit under bit-level supervision. Crucially, we identify and exploit the bit-bias phenomenon in INRs, where most-significant bits (MSBs) are fitted preferentially, thereby lowering the theoretical capacity bound from a digital-representation perspective. Results: Our method achieves truly lossless reconstruction for 2D images and audio, enabling, for the first time, lossless INR representation of 16-bit high-precision signals. It maintains a constant parameter count while significantly accelerating convergence. The framework further extends to lossless image compression, bit-depth expansion, and ultra-low-bit neural network quantization, establishing a novel paradigm for efficient, high-precision INR modeling.
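The decomposition the summary describes can be sketched in a few lines. This is a minimal NumPy illustration of splitting a 16-bit signal into bit planes and losslessly recombining them, not the authors' code; the function names and shapes are our own, and the INR itself (which would predict each plane) is omitted:

```python
import numpy as np

def to_bit_planes(signal: np.ndarray, bit_depth: int = 16) -> np.ndarray:
    """Decompose an unsigned-integer signal into binary bit planes (MSB first)."""
    shifts = np.arange(bit_depth - 1, -1, -1)
    # Each plane is a {0, 1} array with the same spatial shape as the input.
    return (signal[..., None] >> shifts & 1).astype(np.uint8)

def from_bit_planes(planes: np.ndarray, bit_depth: int = 16) -> np.ndarray:
    """Recombine bit planes (MSB first) back into the original integer signal."""
    shifts = np.arange(bit_depth - 1, -1, -1)
    return (planes.astype(np.uint32) << shifts).sum(axis=-1)

# Round trip on a toy 16-bit "image": reconstruction is exact, i.e. lossless.
img = np.random.randint(0, 2**16, size=(4, 4), dtype=np.uint16)
planes = to_bit_planes(img)  # shape (4, 4, 16), one binary plane per bit
assert np.array_equal(from_bit_planes(planes), img)
```

Because each plane is binary, a network that predicts planes only ever has to output one bit per query, which is the sense in which the decomposition caps the precision any single prediction must carry.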

📝 Abstract
We quantify the upper bound on the size of an implicit neural representation (INR) model from a digital perspective. This upper bound grows exponentially as the required bit precision increases. To address this, we present a bit-plane decomposition method that makes the INR predict bit planes, producing the same effect as reducing the upper bound on model size. We validate our hypothesis that reducing the upper bound leads to faster convergence at constant model size. Our method achieves lossless representation in 2D image and audio fitting, even for high bit-depth signals such as 16-bit, which was previously unachievable. We identify the presence of bit bias, whereby an INR prioritizes fitting the most significant bit (MSB). We expand the application of INRs to bit-depth expansion, lossless image compression, and extreme network quantization. Our source code is available at https://github.com/WooKyoungHan/LosslessINR
Problem

Research questions and friction points this paper is trying to address.

Quantify the upper bound on implicit neural representation (INR) model size
Propose bit-plane decomposition to enable lossless representation
Enable lossless fitting of high bit-depth 2D images and audio
Innovation

Methods, ideas, or system contributions that make the work stand out.

Bit-plane decomposition reduces the upper bound on model size.
Achieves lossless representation of high bit-depth signals.
Extends INR applications to compression, bit-depth expansion, and quantization.