CompGS++: Compressed Gaussian Splatting for Static and Dynamic Scene Representation

📅 2025-04-17
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Gaussian splatting representations suffer from significant redundancy, leading to high storage and transmission overhead—hindering their practical deployment in 3D immersive visual communication. To address this, we propose a compact Gaussian primitive modeling framework that jointly optimizes fidelity and compression. Our method introduces, for the first time, a spatio-temporal joint prediction paradigm to eliminate inter-primitive redundancy, coupled with rate-distortion-constrained optimization to suppress intra-primitive parameter redundancy. The pipeline comprises raw prediction, rate-constrained optimization, compact encoding, residual quantization, and joint entropy modeling. Evaluated on multiple benchmark datasets, our approach achieves up to 3.2× higher volumetric compression ratio while maintaining state-of-the-art rendering quality (PSNR/SSIM).
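The summary above describes encoding most primitives as compact residuals against anchor primitives, followed by residual quantization. The paper does not publish this code; the following is a minimal illustrative sketch of that residual-plus-quantization idea, where the dictionary keys, the uniform quantizer, and the `step` parameter are all assumptions for illustration, not the authors' actual design.

```python
import numpy as np

def predict_residual(anchor, primitive):
    """Encode a primitive's parameters as residuals relative to an anchor primitive.

    Both arguments are dicts of parameter arrays (hypothetical layout),
    e.g. {'mean': ..., 'scale': ..., 'color': ...}.
    """
    return {k: primitive[k] - anchor[k] for k in primitive}

def quantize(residual, step=0.01):
    """Uniform scalar quantization (a stand-in for the paper's learned scheme)."""
    return {k: np.round(v / step).astype(np.int32) for k, v in residual.items()}

def reconstruct(anchor, quantized, step=0.01):
    """Decoder side: dequantize the residual and add the anchor back."""
    return {k: anchor[k] + quantized[k].astype(np.float64) * step
            for k in quantized}
```

Transmitting small integer residuals instead of raw parameters is what makes the subsequent entropy coding effective: residuals cluster near zero, so their entropy is much lower than that of the raw values.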

📝 Abstract
Gaussian splatting demonstrates proficiency for 3D scene modeling but suffers from substantial data volume due to inherent primitive redundancy. To enable future photorealistic 3D immersive visual communication applications, significant compression is essential for transmission over the existing Internet infrastructure. Hence, we propose Compressed Gaussian Splatting (CompGS++), a novel framework that leverages compact Gaussian primitives to achieve accurate 3D modeling with substantial size reduction for both static and dynamic scenes. Our design is based on the principle of eliminating redundancy both between and within primitives. Specifically, we develop a comprehensive prediction paradigm to address inter-primitive redundancy through spatial and temporal primitive prediction modules. The spatial primitive prediction module establishes predictive relationships for scene primitives and enables most primitives to be encoded as compact residuals, substantially reducing the spatial redundancy. We further devise a temporal primitive prediction module to handle dynamic scenes, which exploits primitive correlations across timestamps to effectively reduce temporal redundancy. Moreover, we devise a rate-constrained optimization module that jointly minimizes reconstruction error and rate consumption. This module effectively eliminates parameter redundancy within primitives and enhances the overall compactness of scene representations. Comprehensive evaluations across multiple benchmark datasets demonstrate that CompGS++ significantly outperforms existing methods, achieving superior compression performance while preserving accurate scene modeling. Our implementation will be made publicly available on GitHub to facilitate further research.
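The abstract's rate-constrained optimization module jointly minimizes reconstruction error and rate consumption, i.e. a loss of the form distortion + λ·rate. As a hedged sketch only: the histogram-based rate estimate below stands in for the paper's learned joint entropy model, and the MSE distortion, `lam` weight, and function names are illustrative assumptions.

```python
import numpy as np

def estimate_rate_bits(symbols):
    """Estimate total bits for a stream of quantized symbols from its
    empirical distribution (a crude proxy for a learned entropy model)."""
    _, counts = np.unique(symbols, return_counts=True)
    probs = counts / counts.sum()
    # Shannon estimate: each symbol of probability p costs -log2(p) bits.
    return float(-np.sum(counts * np.log2(probs)))

def rate_distortion_loss(rendered, target, symbols, lam=0.01):
    """Joint objective: reconstruction error plus lambda-weighted rate."""
    distortion = float(np.mean((rendered - target) ** 2))  # MSE distortion
    rate = estimate_rate_bits(symbols)
    return distortion + lam * rate
```

Sweeping `lam` traces out a rate-distortion curve: larger values push the optimizer toward smaller bitstreams at some cost in rendering fidelity, which is how a single objective can trade size against quality.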
Problem

Research questions and friction points this paper is trying to address.

Reduces data volume in 3D Gaussian splatting scenes
Compresses static and dynamic 3D scenes efficiently
Minimizes redundancy between and within Gaussian primitives
Innovation

Methods, ideas, or system contributions that make the work stand out.

Compact Gaussian primitives reduce redundancy
Spatial-temporal prediction modules minimize redundancy
Rate-constrained optimization enhances compactness
Xiangrui Liu
Department of Computer Science, City University of Hong Kong, Hong Kong, China
Xinju Wu
City University of Hong Kong
3D Vision · Point Cloud Compression
Shiqi Wang
Department of Computer Science, City University of Hong Kong, Hong Kong, China
Zhu Li
Department of Computer Science and Electrical Engineering, University of Missouri–Kansas City, Kansas City, MO 64110 USA
Sam Kwong
Lingnan University, Hong Kong
Video Coding · Evolutionary Computation · Machine Learning and Pattern Recognition