KeyGS: A Keyframe-Centric Gaussian Splatting Method for Monocular Image Sequences

📅 2024-12-30
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing 3D Gaussian Splatting (3DGS) reconstruction methods for monocular image sequences critically depend on accurate camera poses, and when initialized without pose priors they suffer from slow convergence and local minima. This work proposes a keyframe-centric joint optimization framework coupled with a coarse-to-fine, frequency-aware densification strategy, enabling end-to-end co-optimization of camera poses and Gaussian parameters. The approach integrates rapid Structure-from-Motion (SfM) initialization, an explicit 3DGS representation, joint pose-densification optimization, and frequency-adaptive Gaussian proliferation, which together mitigate pose drift induced by high-frequency signals. As a result, training time is reduced from hours to minutes while surpassing existing depth-free and feature-matching-free methods in both novel-view synthesis quality (PSNR) and camera pose estimation accuracy.

📝 Abstract
Reconstructing high-quality 3D models from sparse 2D images has garnered significant attention in computer vision. Recently, 3D Gaussian Splatting (3DGS) has gained prominence due to its explicit representation, efficient training speed, and real-time rendering capabilities. However, existing methods still depend heavily on accurate camera poses for reconstruction. Although some recent approaches attempt to train 3DGS models from monocular video datasets without Structure-from-Motion (SfM) preprocessing, these methods suffer from prolonged training times, making them impractical for many applications. In this paper, we present an efficient framework that operates without any depth or matching model. Our approach first uses SfM to obtain rough camera poses within seconds, and then refines these poses by leveraging the dense representation in 3DGS, effectively addressing the issue of long training times. Additionally, we integrate the densification process with joint refinement and propose a coarse-to-fine frequency-aware densification to reconstruct different levels of detail. This prevents camera pose estimation from being trapped in local minima or drifting due to high-frequency signals. Our method significantly reduces training time from hours to minutes while achieving more accurate novel view synthesis and camera pose estimation than previous methods.
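The coarse-to-fine frequency-aware idea described above can be illustrated with a small sketch: supervise the joint pose–Gaussian optimization against progressively less-blurred targets, so that early iterations see only low-frequency structure (stabilizing pose estimation) and later iterations see full detail. The schedule shape, `sigma_max`, and the linear decay below are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def frequency_schedule(step, total_steps, sigma_max=8.0):
    """Coarse-to-fine cutoff: heavy blur early (low frequencies only),
    no blur at the end (full-detail supervision). Linear decay is an
    assumption; the paper's actual schedule may differ."""
    t = min(step / total_steps, 1.0)
    return sigma_max * (1.0 - t)

def gaussian_blur_axis(img, sigma, axis):
    """Separable Gaussian blur along one axis with reflect padding."""
    if sigma <= 0:
        return img
    radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    kernel = np.exp(-0.5 * (x / sigma) ** 2)
    kernel /= kernel.sum()
    pad = [(radius, radius) if a == axis else (0, 0)
           for a in range(img.ndim)]
    padded = np.pad(img, pad, mode="reflect")
    return np.apply_along_axis(
        lambda v: np.convolve(v, kernel, mode="valid"), axis, padded)

def low_pass_target(img, step, total_steps):
    """Ground-truth image filtered for the current training step."""
    sigma = frequency_schedule(step, total_steps)
    return gaussian_blur_axis(gaussian_blur_axis(img, sigma, 0), sigma, 1)
```

At step 0 the target is strongly smoothed, so the photometric loss is dominated by coarse structure; by the final step `low_pass_target` returns the unmodified image and densification can populate fine detail.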
Problem

Research questions and friction points this paper is trying to address.

3D Reconstruction
Camera Pose Dependence
Training Time Efficiency
Innovation

Methods, ideas, or system contributions that make the work stand out.

KeyGS
3D Reconstruction
Camera Pose Estimation