ContraGS: Codebook-Condensed and Trainable Gaussian Splatting for Fast, Memory-Efficient Reconstruction

📅 2025-09-03
📈 Citations: 0
Influential: 0
🤖 AI Summary
3D Gaussian Splatting (3DGS) achieves high-fidelity real-time novel view synthesis but suffers from excessive GPU memory consumption and low training/rendering efficiency due to the large number of Gaussians. This paper proposes the first differentiable 3DGS training framework supporting codebook-based compression: Gaussian parameter vectors are quantized into a compact codebook space, formulated as a Bayesian inference problem, and optimized end-to-end in the compressed domain via MCMC sampling to overcome the non-differentiability of codebook index selection. Crucially, our method retains the original Gaussian count and preserves reconstruction quality—achieving PSNR/SSIM competitive with state-of-the-art methods—while reducing peak training memory by 3.49× and accelerating training and rendering by 1.36× and 1.88×, respectively. The core contribution is the first differentiable end-to-end optimization framework operating directly in the compressed domain, uniquely balancing fidelity and efficiency.
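The memory saving from codebook condensation can be illustrated with a back-of-the-envelope sketch: instead of storing one full parameter vector per Gaussian, the model stores a small shared codebook plus a per-Gaussian index. The sizes below (1M Gaussians, 59 float parameters, a 4096-entry codebook) are illustrative assumptions, not the paper's actual configuration:

```python
import numpy as np

# Hypothetical sizes: 1M Gaussians, 59 float params each (position, scale,
# rotation, opacity, SH color), condensed into a 4096-entry codebook.
num_gaussians, dim, codebook_size = 1_000_000, 59, 4096

# Dense representation: one full parameter vector per Gaussian.
dense = np.zeros((num_gaussians, dim), dtype=np.float32)

# Codebook-condensed representation: shared entries + per-Gaussian indices.
codebook = np.zeros((codebook_size, dim), dtype=np.float32)
indices = np.zeros(num_gaussians, dtype=np.uint16)  # 4096 entries fit in 16 bits

ratio = dense.nbytes / (codebook.nbytes + indices.nbytes)
print(f"compression ratio: {ratio:.1f}x")
```

The reported 3.49× reduction in peak training memory is smaller than this raw-storage ratio because training also holds gradients, optimizer state, and intermediate buffers that are not codebook-compressed.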

📝 Abstract
3D Gaussian Splatting (3DGS) is a state-of-the-art technique for modeling real-world scenes with high quality and real-time rendering. Typically, a higher-quality representation is achieved by using a large number of 3D Gaussians. However, large 3D Gaussian counts significantly increase the GPU memory needed to store model parameters. A large model thus requires powerful GPUs with high memory capacity for training and suffers slower training/rendering latencies due to inefficient memory access and data movement. In this work, we introduce ContraGS, a method that enables training directly on compressed 3DGS representations without reducing the Gaussian count, and thus with little loss in model quality. ContraGS leverages codebooks to compactly store a set of Gaussian parameter vectors throughout the training process, thereby significantly reducing memory consumption. While codebooks have been demonstrated to be highly effective at compressing fully trained 3DGS models, training directly on codebook representations is an unsolved challenge. ContraGS solves the problem of learning non-differentiable parameters in codebook-compressed representations by posing parameter estimation as a Bayesian inference problem. To this end, ContraGS provides a framework that effectively uses MCMC sampling over a posterior distribution of these compressed representations. With ContraGS, we demonstrate that ContraGS significantly reduces peak memory during training (on average 3.49×) and accelerates training and rendering (1.36× and 1.88× on average, respectively), while retaining close to state-of-the-art quality.
Problem

Research questions and friction points this paper is trying to address.

Reducing GPU memory consumption in 3D Gaussian Splatting training
Enabling direct training on compressed representations without quality loss
Solving non-differentiable parameter learning in codebook-compressed models
Innovation

Methods, ideas, or system contributions that make the work stand out.

Codebook-condensed Gaussian splatting for memory efficiency
Bayesian inference for non-differentiable parameter learning
MCMC sampling over compressed representation posterior distribution
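The core difficulty these contributions address is that selecting a codebook index is a discrete choice with no gradient. One generic way to optimize such a choice, shown below, is Metropolis-Hastings sampling over the index, accepting proposals according to a loss-derived posterior. This is a toy, single-scalar sketch of the general idea, not the paper's actual MCMC formulation (the variable names, temperature, and loss are all illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy codebook of scalar "parameters" and a target value a Gaussian should
# reconstruct; the loss is non-differentiable w.r.t. the discrete index.
codebook = np.linspace(-1.0, 1.0, 16)
target = 0.37

def loss(i):
    return (codebook[i] - target) ** 2

# Metropolis-Hastings over the index: propose a random entry, accept with
# probability exp(-(loss(new) - loss(old)) / temperature). No gradients
# through the index selection are needed.
idx, temperature = 0, 0.005
for _ in range(2000):
    proposal = int(rng.integers(len(codebook)))
    delta = (loss(idx) - loss(proposal)) / temperature
    if delta >= 0 or rng.random() < np.exp(delta):
        idx = proposal

print(idx, float(codebook[idx]))
```

The chain concentrates on indices whose codebook entries are close to the target, so sampling effectively performs the discrete optimization that gradient descent cannot.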