🤖 AI Summary
Gaussian splatting representations suffer from significant redundancy, leading to high storage and transmission overhead—hindering their practical deployment in 3D immersive visual communication. To address this, we propose a compact Gaussian primitive modeling framework that jointly optimizes fidelity and compression. Our method introduces, for the first time, a spatio-temporal joint prediction paradigm to eliminate inter-primitive redundancy, coupled with rate-distortion-constrained optimization to suppress intra-primitive parameter redundancy. The pipeline comprises primitive prediction, rate-constrained optimization, compact encoding, residual quantization, and joint entropy modeling. Evaluated on multiple benchmark datasets, our approach achieves up to 3.2× higher volumetric compression ratio while maintaining state-of-the-art rendering quality (PSNR/SSIM).
📝 Abstract
Gaussian splatting demonstrates proficiency in 3D scene modeling but suffers from substantial data volume due to inherent primitive redundancy. To enable future photorealistic 3D immersive visual communication applications, significant compression is essential for transmission over the existing Internet infrastructure. Hence, we propose Compressed Gaussian Splatting (CompGS++), a novel framework that leverages compact Gaussian primitives to achieve accurate 3D modeling with substantial size reduction for both static and dynamic scenes. Our design is based on the principle of eliminating redundancy both between and within primitives. Specifically, we develop a comprehensive prediction paradigm to address inter-primitive redundancy through spatial and temporal primitive prediction modules. The spatial primitive prediction module establishes predictive relationships for scene primitives and enables most primitives to be encoded as compact residuals, substantially reducing spatial redundancy. We further design a temporal primitive prediction module to handle dynamic scenes, which exploits primitive correlations across timestamps to effectively reduce temporal redundancy. Moreover, we devise a rate-constrained optimization module that jointly minimizes reconstruction error and rate consumption. This module effectively eliminates parameter redundancy within primitives and enhances the overall compactness of scene representations. Comprehensive evaluations across multiple benchmark datasets demonstrate that CompGS++ significantly outperforms existing methods, achieving superior compression performance while preserving accurate scene modeling. Our implementation will be made publicly available on GitHub to facilitate further research.
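To make the two core ideas concrete — encoding primitives as residuals against a reference, and a rate-distortion objective of the form distortion + λ·rate — here is a minimal toy sketch. All function names, the uniform scalar quantizer, and the entropy-based rate proxy are illustrative assumptions for exposition, not the paper's actual implementation.

```python
import numpy as np

# Illustrative sketch only: toy residual-based primitive prediction plus a
# rate-distortion-style objective. The quantization step, lambda, and the
# entropy rate proxy are assumptions, not CompGS++'s actual design.

def predict_residual(anchor, primitive):
    """Encode a primitive's parameters as a residual against a reference (anchor)."""
    return primitive - anchor

def quantize(residual, step=0.05):
    """Uniform scalar quantization of the residual."""
    return np.round(residual / step) * step

def rate_proxy(quantized, step=0.05):
    """Toy rate estimate: empirical entropy of the quantized symbols (bits/element)."""
    symbols, counts = np.unique(np.round(quantized / step).astype(int),
                                return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

def rd_loss(anchor, primitive, lam=0.01, step=0.05):
    """Rate-distortion objective: reconstruction error + lambda * rate."""
    q = quantize(predict_residual(anchor, primitive), step)
    recon = anchor + q                         # decoder-side reconstruction
    distortion = float(np.mean((primitive - recon) ** 2))
    return distortion + lam * rate_proxy(q, step)
```

The point of the sketch is the trade-off the abstract describes: shrinking the quantization step lowers distortion but raises the rate term, and the joint objective lets optimization balance the two.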