🤖 AI Summary
Transient objects, such as pedestrians and vehicles, severely degrade the fidelity of 3D scene reconstruction from video. To address this, we propose a two-stage robust filtering framework built on 3D Gaussian Splatting (3DGS). The first stage exploits inherent differences in the training dynamics of transient and static elements during 3DGS optimization to separate dynamic from static content in a fully unsupervised manner. The second stage combines Mask2Former-based semantic segmentation with bidirectional optical flow and trajectory tracking to refine object boundaries and enforce temporal consistency. The method requires no manual annotations or domain-specific priors. Evaluated on both sparsely and densely captured real-world video datasets, it significantly outperforms state-of-the-art approaches, delivering high-fidelity, temporally coherent, and robust 3D reconstructions.
📝 Abstract
Transient objects in video sequences can significantly degrade the quality of 3D scene reconstructions. To address this challenge, we propose T-3DGS, a novel framework that robustly filters out transient distractors during 3D reconstruction using Gaussian Splatting. Our framework consists of two steps. First, we employ an unsupervised classification network that distinguishes transient objects from static scene elements by leveraging their distinct training dynamics within the reconstruction process. Second, we refine these initial detections by integrating an off-the-shelf segmentation method with a bidirectional tracking module, which together enhance boundary accuracy and temporal coherence. Evaluations on both sparsely and densely captured video datasets demonstrate that T-3DGS significantly outperforms state-of-the-art approaches, enabling high-fidelity 3D reconstructions in challenging, real-world scenarios.
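The two stages described above can be illustrated with a minimal sketch. This is not the paper's actual classifier or tracking module; it only mimics the core intuition under simplifying assumptions: (1) pixels covered by transient objects keep a high photometric residual across training iterations while static content converges, and (2) a per-frame transient mask can be made temporally coherent by voting across neighboring frames (a crude stand-in for the bidirectional tracking). All thresholds and the synthetic residual data are hypothetical.

```python
import numpy as np

def classify_transients(residual_history, threshold=0.5):
    """Label pixels whose photometric residual stays high across training
    iterations as transient. `residual_history` has shape (iters, H, W).
    The mean-residual threshold is a hypothetical stand-in for the paper's
    unsupervised classification network."""
    mean_res = residual_history.mean(axis=0)  # (H, W) average over iterations
    return mean_res > threshold               # boolean transient mask

def temporal_refine(masks):
    """Enforce temporal coherence by majority-voting each pixel over a
    window of adjacent frames (a crude proxy for bidirectional tracking)."""
    T = len(masks)
    refined = []
    for t in range(T):
        neigh = np.stack(masks[max(t - 1, 0):min(t + 2, T)])
        refined.append(neigh.mean(axis=0) >= 0.5)
    return refined

# Synthetic residual history: static pixels converge (residual decays as
# 1/iteration), while a transient patch keeps a high residual throughout.
H, W, iters = 8, 8, 20
residual_history = np.empty((iters, H, W))
for it in range(iters):
    residual_history[it] = 1.0 / (it + 1)   # static background: decays
    residual_history[it, 2:5, 2:5] = 0.9    # transient patch: stays high

mask = classify_transients(residual_history)

# Stage 2: a spurious single-frame detection is removed by neighbor voting.
frames = [mask.copy(), mask.copy(), mask.copy()]
frames[1][0, 0] = True  # one-frame false positive
refined = temporal_refine(frames)
```

Here the transient patch is recovered purely from optimization behavior, with no labels, and the flickering false positive at pixel (0, 0) disappears after the voting pass because it is unsupported by adjacent frames.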