🤖 AI Summary
To address the challenge of effectively compressing unstructured, high-precision point clouds, this paper proposes NeRC³, an end-to-end compression framework built on coordinate-based implicit neural representations. NeRC³ voxelizes a point cloud into a sparse occupancy grid and employs a dual-network architecture to jointly model voxel occupancy (geometry) and attribute values, combined with parameter quantization and entropy coding. Extending this approach, 4D-NeRC³ introduces a 4D spatiotemporal coordinate network that unifies the modeling of geometry, attributes, and temporal correlations for dynamic point clouds. Experimental results show that NeRC³ outperforms the octree-based codec in the latest G-PCC standard on static point clouds in rate-distortion terms, while 4D-NeRC³ surpasses both G-PCC and V-PCC in geometry compression for dynamic point clouds and achieves competitive results for joint geometry and attribute compression.
📝 Abstract
Point clouds have gained prominence in numerous applications due to their ability to accurately depict 3D objects and scenes. However, compressing unstructured, high-precision point cloud data effectively remains a significant challenge. In this paper, we propose NeRC³, a novel point cloud compression framework leveraging implicit neural representations to handle both geometry and attributes. Our approach employs two coordinate-based neural networks to implicitly represent a voxelized point cloud: the first determines the occupancy status of a voxel, while the second predicts the attributes of occupied voxels. By feeding voxel coordinates into these networks, the receiver can efficiently reconstruct the original point cloud's geometry and attributes. The neural network parameters are quantized and compressed alongside auxiliary information required for reconstruction. Additionally, we extend our method to dynamic point cloud compression with techniques to reduce temporal redundancy, including a 4D spatial-temporal representation termed 4D-NeRC³. Experimental results validate the effectiveness of our approach: for static point clouds, NeRC³ outperforms octree-based methods in the latest G-PCC standard. For dynamic point clouds, 4D-NeRC³ demonstrates superior geometry compression compared to state-of-the-art G-PCC and V-PCC standards and achieves competitive results for joint geometry and attribute compression.
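To make the dual-network idea concrete, the following is a minimal NumPy sketch of the decoding side as described in the abstract: one coordinate-based network maps a voxel coordinate to an occupancy decision, and a second maps the coordinates of occupied voxels to attribute (e.g., RGB) values. All names, layer sizes, and the random weights are illustrative assumptions; in the actual method the parameters would be the trained, quantized, entropy-decoded network weights.

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp(dims):
    # Randomly initialized weights stand in for the decoded network parameters.
    return [(rng.standard_normal((a, b)) * 0.1, np.zeros(b))
            for a, b in zip(dims[:-1], dims[1:])]

def forward(params, x):
    # Plain MLP with ReLU on hidden layers, linear output.
    for i, (W, b) in enumerate(params):
        x = x @ W + b
        if i < len(params) - 1:
            x = np.maximum(x, 0.0)
    return x

# Occupancy network: 3D voxel coordinate -> occupancy logit (geometry).
f_occ = mlp([3, 64, 64, 1])
# Attribute network: coordinate of an occupied voxel -> RGB value.
f_attr = mlp([3, 64, 64, 3])

# Decoding: sweep every voxel coordinate of an 8^3 grid through the
# occupancy network, then query attributes only at occupied voxels.
grid = np.stack(np.meshgrid(*[np.arange(8)] * 3, indexing="ij"),
                axis=-1).reshape(-1, 3)
coords = grid / 8.0                              # normalized coordinates
occupied = forward(f_occ, coords).ravel() > 0.0  # threshold the logit
points = coords[occupied]                        # reconstructed geometry
colors = forward(f_attr, points)                 # reconstructed attributes
```

The key design point the sketch illustrates is that the bitstream need not carry the voxels themselves: the receiver regenerates geometry by exhaustively querying the occupancy network over the coordinate grid, so only the (quantized) network parameters and auxiliary reconstruction information are transmitted.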