Efficient multi-view training for 3D Gaussian Splatting

📅 2025-06-15
📈 Citations: 0
Influential: 0
🤖 AI Summary
3D Gaussian Splatting (3DGS) has emerged as an efficient alternative to NeRF, yet its prevalent single-view mini-batch training suffers from high gradient variance, optimization instability, and biased density modeling. To address these limitations, we propose the first low-overhead, high-fidelity multi-view 3DGS training paradigm. Our method introduces a 3D distance-aware D-SSIM loss to enforce geometric consistency across views; incorporates multi-view adaptive Gaussian density control to relax the restrictive single-view assumption; and optimizes the batched multi-view rasterization pipeline. Crucially, our approach retains near single-view training speed while significantly improving reconstruction accuracy and optimization stability. Quantitatively, it achieves PSNR gains of 1.2–2.1 dB over baseline 3DGS on multiple benchmark datasets, with corresponding SSIM improvements.

📝 Abstract
3D Gaussian Splatting (3DGS) has emerged as a preferred choice alongside Neural Radiance Fields (NeRF) in inverse rendering due to its superior rendering speed. Currently, the common approach in 3DGS is to utilize "single-view" mini-batch training, where only one image is processed per iteration, in contrast to NeRF's "multi-view" mini-batch training, which leverages multiple images. We observe that such single-view training can lead to suboptimal optimization due to increased variance in mini-batch stochastic gradients, highlighting the necessity for multi-view training. However, implementing multi-view training in 3DGS poses challenges. Simply rendering multiple images per iteration incurs considerable overhead and may result in suboptimal Gaussian densification due to its reliance on single-view assumptions. To address these issues, we modify the rasterization process to minimize the overhead associated with multi-view training and propose a 3D distance-aware D-SSIM loss and multi-view adaptive density control that better suits multi-view scenarios. Our experiments demonstrate that the proposed methods significantly enhance the performance of 3DGS and its variants, freeing 3DGS from the constraints of single-view training.
Problem

Research questions and friction points this paper is trying to address.

Addresses suboptimal optimization in 3DGS single-view training
Reduces overhead in multi-view training for 3DGS
Improves Gaussian densification for multi-view scenarios
Innovation

Methods, ideas, or system contributions that make the work stand out.

Modified rasterization for multi-view training
3D distance-aware D-SSIM loss
Multi-view adaptive density control
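The variance argument behind the paper's motivation can be illustrated with a toy numerical sketch. This is not the paper's renderer or loss: the "true" gradient vector, noise level, and batch sizes below are all hypothetical, chosen only to show that averaging per-view stochastic gradients in a multi-view mini-batch shrinks the estimator's variance by roughly 1/batch_size.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: each rendered view yields a noisy stochastic
# estimate of the true scene-loss gradient. Single-view 3DGS training
# uses one such estimate per step; multi-view training averages several.
g_true = np.array([1.0, -2.0, 0.5])   # made-up "true" gradient
noise_std = 1.0
n_trials = 20_000

def batched_gradient_variance(batch_size):
    """Mean per-coordinate variance of the mini-batch gradient estimate."""
    noise = rng.normal(0.0, noise_std,
                       size=(n_trials, batch_size, g_true.size))
    grads = (g_true + noise).mean(axis=1)  # average over views in the batch
    return grads.var(axis=0).mean()

var_single = batched_gradient_variance(1)  # single-view (3DGS default)
var_multi = batched_gradient_variance(4)   # multi-view batch of 4 views

print(var_single, var_multi)
```

Under these assumptions the 4-view batch yields roughly a quarter of the single-view gradient variance, which is the stability benefit the abstract attributes to multi-view mini-batch training; the paper's contributions then target the rendering overhead and densification issues that naive multi-view batching would otherwise introduce.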