🤖 AI Summary
To address the excessive model size and deployment constraints of 3D Gaussian Splatting (3DGS), this paper proposes a lightweight training paradigm that achieves efficient model compression without compromising rendering fidelity. Methodologically, it introduces a multi-stage adaptive optimization strategy integrating gradient-sensitivity-driven parameter pruning, anisotropic covariance regularization, and joint density-opacity scheduling. The approach compresses the 3DGS model to as little as 10% of its original size (a 90% reduction) while matching the baseline 3DGS in PSNR and SSIM. Evaluation across multiple standard datasets shows a 2.3× speedup in inference latency and a 76% reduction in GPU memory consumption, significantly improving the practical deployability of 3DGS in resource-constrained scenarios.
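The summary mentions gradient-sensitivity-driven parameter pruning. As a rough illustration only, the idea can be sketched as ranking Gaussians by an importance score and keeping the top fraction; the score used here (accumulated gradient magnitude weighted by opacity) and the `keep_ratio` parameter are illustrative assumptions, not the paper's actual criterion.

```python
import numpy as np

def prune_gaussians(params, grad_accum, opacity, keep_ratio=0.1):
    """Keep the top `keep_ratio` fraction of Gaussians, ranked by a
    hypothetical sensitivity score: |accumulated gradient| * opacity.
    `params` maps attribute names (means, scales, ...) to per-Gaussian arrays."""
    score = np.abs(grad_accum) * opacity
    k = max(1, int(len(score) * keep_ratio))
    keep_idx = np.argsort(score)[-k:]  # indices of the k highest-scoring Gaussians
    pruned = {name: p[keep_idx] for name, p in params.items()}
    return pruned, keep_idx

# Toy example: 1000 Gaussians with 3-D means, pruned to 10%.
rng = np.random.default_rng(0)
params = {"means": rng.normal(size=(1000, 3))}
grad_accum = rng.random(1000)
opacity = rng.random(1000)

pruned, kept = prune_gaussians(params, grad_accum, opacity, keep_ratio=0.1)
print(pruned["means"].shape)  # (100, 3)
```

In a real pipeline this step would run periodically during training, interleaved with the densification/opacity scheduling the summary describes, rather than once at the end.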
📝 Abstract
3D Gaussian Splatting (3DGS) offers an efficient solution for novel view synthesis, providing fast and high-fidelity rendering. Although less complex than alternatives such as Neural Radiance Fields (NeRF), it still poses challenges when building smaller models without sacrificing quality. In this study, we perform a careful analysis of the 3DGS training process and propose a new optimization methodology. Our Better Optimized Gaussian Splatting (BOGausS) solution generates models up to ten times lighter than the original 3DGS with no quality degradation, significantly boosting the performance of Gaussian Splatting compared to the state of the art.