Implicit Neural Compression of Point Clouds

📅 2024-12-11
🏛️ arXiv.org
📈 Citations: 1
Influential: 0
🤖 AI Summary
To address the challenge of compressing unstructured, high-precision point clouds, this paper proposes NeRC³: an end-to-end compression framework based on coordinate-based implicit neural representations. NeRC³ voxelizes point clouds into sparse occupancy grids and employs a dual-network architecture to jointly model voxel occupancy (geometry) and attribute values, integrated with parameter quantization and entropy coding. Extending this approach, 4D-NeRC³ introduces a novel 4D spatiotemporal coordinate network, the first applied to dynamic point cloud compression, enabling unified modeling of geometry, attributes, and spatiotemporal correlations. Experimental results demonstrate that NeRC³ outperforms the octree-based methods of the latest G-PCC standard on static point clouds in terms of rate-distortion performance. Moreover, 4D-NeRC³ achieves higher geometric fidelity than both G-PCC and V-PCC on dynamic point clouds and delivers competitive results for joint geometry-and-attribute compression.
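The dual-network design described in the summary can be sketched with a toy coordinate-based model. Everything here (network width, random weights, grid size, the 0.5 threshold) is an illustrative placeholder, not the paper's trained architecture; it only shows the query flow: one network maps a voxel coordinate to an occupancy probability, and a second network predicts attributes only at voxels deemed occupied.

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp(x, w1, b1, w2, b2):
    # Two-layer coordinate network: ReLU hidden layer, linear output.
    h = np.maximum(x @ w1 + b1, 0.0)
    return h @ w2 + b2

# Hypothetical tiny networks (the real ones are larger and trained per point cloud).
D = 16
w1g, b1g = rng.normal(size=(3, D)), np.zeros(D)  # geometry (occupancy) network
w2g, b2g = rng.normal(size=(D, 1)), np.zeros(1)
w1a, b1a = rng.normal(size=(3, D)), np.zeros(D)  # attribute (e.g. RGB) network
w2a, b2a = rng.normal(size=(D, 3)), np.zeros(3)

# Enumerate every voxel coordinate of an 8^3 grid, normalized to [0, 1].
N = 8
coords = np.stack(
    np.meshgrid(*[np.arange(N)] * 3, indexing="ij"), -1
).reshape(-1, 3) / (N - 1)

# Network 1: occupancy probability per voxel, thresholded to a binary grid.
occ_prob = 1.0 / (1.0 + np.exp(-mlp(coords, w1g, b1g, w2g, b2g)))
occupied = coords[occ_prob[:, 0] > 0.5]

# Network 2: attributes queried only at the occupied voxels.
attrs = mlp(occupied, w1a, b1a, w2a, b2a)
```

Because the receiver can regenerate `coords` itself, only the (quantized) network parameters and auxiliary information need to be transmitted.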

📝 Abstract
Point clouds have gained prominence in numerous applications due to their ability to accurately depict 3D objects and scenes. However, compressing unstructured, high-precision point cloud data effectively remains a significant challenge. In this paper, we propose NeRC$^{\textbf{3}}$, a novel point cloud compression framework leveraging implicit neural representations to handle both geometry and attributes. Our approach employs two coordinate-based neural networks to implicitly represent a voxelized point cloud: the first determines the occupancy status of a voxel, while the second predicts the attributes of occupied voxels. By feeding voxel coordinates into these networks, the receiver can efficiently reconstruct the original point cloud's geometry and attributes. The neural network parameters are quantized and compressed alongside auxiliary information required for reconstruction. Additionally, we extend our method to dynamic point cloud compression with techniques to reduce temporal redundancy, including a 4D spatial-temporal representation termed 4D-NeRC$^{\textbf{3}}$. Experimental results validate the effectiveness of our approach: for static point clouds, NeRC$^{\textbf{3}}$ outperforms octree-based methods in the latest G-PCC standard. For dynamic point clouds, 4D-NeRC$^{\textbf{3}}$ demonstrates superior geometry compression compared to state-of-the-art G-PCC and V-PCC standards and achieves competitive results for joint geometry and attribute compression.
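The abstract's "parameters are quantized and compressed" step can be illustrated with a minimal uniform quantizer. The step size and weight values below are hypothetical; in the framework, the resulting integer symbols would be entropy-coded, and the dequantized weights are what the receiver actually uses.

```python
import numpy as np

def quantize(weights, step=0.02):
    # Uniform quantization: map each weight to the nearest multiple of `step`.
    symbols = np.round(weights / step).astype(np.int32)  # integers to entropy-code
    dequant = symbols.astype(np.float64) * step          # weights seen by the decoder
    return symbols, dequant

# Hypothetical network weights; a real model has thousands of them.
w = np.array([0.013, -0.047, 0.108])
symbols, w_hat = quantize(w)
```

The step size trades bitrate against reconstruction quality: a coarser step shrinks the symbol alphabet (fewer bits after entropy coding) but perturbs the network's output more.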
Problem

Research questions and friction points this paper is trying to address.

Compressing unstructured high-precision point cloud data efficiently
Leveraging implicit neural representations for geometry and attribute encoding
Extending compression to dynamic point clouds by reducing temporal redundancy
Innovation

Methods, ideas, or system contributions that make the work stand out.

Implicit neural networks compress point cloud geometry
Separate networks encode voxel occupancy and attributes
4D spatio-temporal representation reduces temporal redundancy
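The last point, extending the coordinate input with time, can be sketched as follows. The grid size, frame count, and normalization are illustrative assumptions; the idea is simply that appending a normalized frame index lets one network answer queries for every frame of a dynamic sequence.

```python
import numpy as np

# Hypothetical 4D spatio-temporal query set: (x, y, z, t) per voxel per frame.
N, T = 8, 4  # toy grid resolution and frame count

# Spatial voxel coordinates of an N^3 grid, normalized to [0, 1].
xyz = np.stack(
    np.meshgrid(*[np.arange(N)] * 3, indexing="ij"), -1
).reshape(-1, 3) / (N - 1)

# Append a normalized time coordinate for each frame and stack the frames.
frames = []
for t in range(T):
    tcol = np.full((xyz.shape[0], 1), t / (T - 1))
    frames.append(np.concatenate([xyz, tcol], axis=1))
coords4d = np.concatenate(frames, axis=0)  # one query set for the whole sequence
```

A single network over these 4D inputs can exploit similarity between consecutive frames instead of representing each frame independently, which is how temporal redundancy is reduced.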
Hongning Ruan
Zhejiang University
Yulin Shao
University of Hong Kong
Coding and Modulation, Machine Learning, Stochastic Control
Qianqian Yang
Zhejiang University
Information Theory, Wireless AI, Semantic Communication, Machine Learning
Liang Zhao
Department of Information Science and Electronic Engineering, Zhejiang University
Zhaoyang Zhang
Department of Information Science and Electronic Engineering, Zhejiang University
D. Niyato
School of Computer Science and Engineering, Nanyang Technological University, Singapore