🤖 AI Summary
Existing simplification methods for 3D Gaussian Splatting (3DGS) often rely on importance metrics that are not grounded in visual error, and therefore struggle to balance model compactness against rendering fidelity. This work proposes a principled simplification framework built on analytical visual-error quantification: a per-Gaussian contribution metric, derived directly from the 3DGS rendering equation, that supports both pruning during training and post-training simplification. The metric can be computed efficiently in a single forward pass, and an iterative reweighting mechanism improves simplification stability. Experiments show that the approach consistently outperforms state-of-the-art pruning techniques in both settings, achieving a markedly better trade-off between compression ratio and rendering quality.
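To make the idea of a contribution metric derived from the rendering equation concrete, here is a minimal NumPy sketch. It is not the paper's exact criterion (which is not reproduced in this summary); it only illustrates the standard 3DGS alpha-blending equation, C = Σᵢ cᵢ αᵢ Πⱼ<ᵢ (1 − αⱼ), where the blending weight wᵢ = αᵢ Tᵢ bounds how much removing Gaussian i can perturb the pixel color, making it a natural visual-error-driven importance score. The function name and array layout are illustrative assumptions.

```python
import numpy as np

def per_gaussian_contribution(colors, alphas):
    """Illustrative per-Gaussian contribution along one ray (not the
    paper's exact criterion).

    Under the 3DGS alpha-blending equation
        C = sum_i c_i * alpha_i * prod_{j<i} (1 - alpha_j),
    the blending weight of Gaussian i is w_i = alpha_i * T_i, where
    T_i = prod_{j<i} (1 - alpha_j) is the transmittance accumulated in
    front of it. Removing Gaussian i changes the pixel color by roughly
    w_i * c_i, so w_i (or w_i * |c_i|) serves as a visual-error proxy.

    colors: (N, 3) colors of the N depth-sorted Gaussians on this ray
    alphas: (N,)   their effective opacities after projection
    """
    # Transmittance in front of each Gaussian: T_0 = 1, T_i = prod_{j<i}(1 - a_j)
    T = np.concatenate(([1.0], np.cumprod(1.0 - alphas)[:-1]))
    weights = alphas * T                       # blending weights, sum <= 1
    return weights, weights[:, None] * colors  # per-Gaussian color contribution
```

Summing the per-Gaussian contributions recovers the composited pixel color, which is what lets a single forward pass attribute rendering error back to individual Gaussians.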
📝 Abstract
Existing 3D Gaussian Splatting simplification methods commonly use importance scores, such as blending weights or sensitivity, to identify redundant Gaussians. However, these scores are not driven by visual error metrics, which often leads to suboptimal trade-offs between compactness and rendering fidelity. We present GaussianPOP, a principled simplification framework based on analytical quantification of per-Gaussian error. Our key contribution is a novel error criterion, derived directly from the 3DGS rendering equation, that precisely measures each Gaussian's contribution to the rendered image. A highly efficient algorithm makes this error practical to compute in a single forward pass. The framework is both accurate and flexible, supporting pruning during training as well as post-training simplification with iterative error re-quantification for improved stability. Experimental results show that our method consistently outperforms state-of-the-art pruning methods in both application scenarios, achieving a superior trade-off between model compactness and rendering quality.
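The iterative error re-quantification mentioned above can be sketched as a multi-round pruning loop: because removing one Gaussian changes the transmittance seen by the Gaussians behind it, scores computed once on the full model go stale after pruning, so the surviving set is re-scored between rounds. The function below is a hedged illustration of that loop only; `scores_fn`, the round count, and the geometric keep schedule are assumptions, not the paper's algorithm.

```python
import numpy as np

def iterative_prune(scores_fn, n_gaussians, keep_ratio, n_rounds=4):
    """Prune to `keep_ratio` of the Gaussians over several rounds,
    recomputing importance scores after each round (illustrative sketch
    of iterative error re-quantification, not the paper's algorithm).

    scores_fn: assumed callable mapping a boolean keep-mask to one
               score per *surviving* Gaussian (higher = more important).
    """
    keep = np.ones(n_gaussians, dtype=bool)
    # Geometric schedule: each round keeps the same fraction, so after
    # n_rounds the overall keep ratio is reached.
    per_round = keep_ratio ** (1.0 / n_rounds)
    for _ in range(n_rounds):
        scores = scores_fn(keep)            # re-quantify on survivors only
        idx = np.flatnonzero(keep)          # global indices of survivors
        k = max(1, int(round(per_round * idx.size)))
        order = np.argsort(scores)          # ascending: lowest score first
        keep[idx[order[:idx.size - k]]] = False  # drop the lowest-scored
    return keep
```

One-shot pruning with frozen scores is the degenerate case `n_rounds=1`; spreading the budget over rounds is what the abstract's stability claim refers to.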