AI Summary
In sparse-view settings, 3D Gaussian Splatting (3DGS) suffers from low geometric reconstruction accuracy: existing methods rely on non-local depth regularization, limiting fine-grained structural modeling, while conventional smoothing strategies ignore semantic boundaries, degrading critical edges and textures. To address this, we propose a depth- and edge-aware multi-level regularization framework. Our approach integrates hierarchical depth supervision, Canny-edge-guided semantic masking, and an RGB-guided total variation loss, jointly suppressing depth noise while explicitly preserving structural boundaries and high-frequency details. Evaluated on multiple sparse-view novel view synthesis benchmarks, our method significantly improves geometric consistency and visual fidelity, particularly around complex structures and object boundaries, outperforming current state-of-the-art approaches.
Abstract
3D Gaussian Splatting (3DGS) has significantly advanced efficient, high-fidelity novel view synthesis. Despite recent progress, achieving accurate geometric reconstruction under sparse-view conditions remains a fundamental challenge. Existing methods often rely on non-local depth regularization, which fails to capture fine-grained structures and is highly sensitive to depth estimation noise. Furthermore, traditional smoothing methods neglect semantic boundaries and indiscriminately degrade essential edges and textures, limiting the overall quality of reconstruction. In this work, we propose DET-GS, a unified depth- and edge-aware regularization framework for 3D Gaussian Splatting. DET-GS introduces a hierarchical geometric depth supervision framework that adaptively enforces multi-level geometric consistency, significantly enhancing structural fidelity and robustness against depth estimation noise. To preserve scene boundaries, we design an edge-aware depth regularization guided by semantic masks derived from Canny edge detection. Furthermore, we introduce an RGB-guided edge-preserving Total Variation loss that selectively smooths homogeneous regions while rigorously retaining high-frequency details and textures. Extensive experiments demonstrate that DET-GS achieves substantial improvements in both geometric accuracy and visual fidelity, outperforming state-of-the-art (SOTA) methods on sparse-view novel view synthesis benchmarks.
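To illustrate the idea behind an RGB-guided edge-preserving Total Variation loss, here is a minimal NumPy sketch. This is not the DET-GS implementation (the paper's exact weighting, mask construction, and framework are not given here); it shows the common edge-aware form in which depth-gradient penalties are down-weighted where the RGB image itself has strong gradients, so smoothing is applied in homogeneous regions but relaxed across likely object boundaries. The function name and the `beta` sharpness parameter are illustrative assumptions.

```python
import numpy as np

def edge_aware_tv_loss(depth, rgb, beta=10.0):
    """Edge-preserving TV sketch: penalize depth gradients, but weight the
    penalty by exp(-beta * |grad I|) so strong image edges are not smoothed.

    depth: (H, W) float array; rgb: (H, W, 3) float array in [0, 1].
    """
    # Horizontal / vertical finite differences of the depth map.
    d_dx = np.abs(depth[:, 1:] - depth[:, :-1])      # (H, W-1)
    d_dy = np.abs(depth[1:, :] - depth[:-1, :])      # (H-1, W)

    # RGB gradient magnitude, averaged over the color channels.
    i_dx = np.abs(rgb[:, 1:, :] - rgb[:, :-1, :]).mean(axis=-1)
    i_dy = np.abs(rgb[1:, :, :] - rgb[:-1, :, :]).mean(axis=-1)

    # Weights decay toward 0 at image edges -> smoothing is suppressed there.
    w_x = np.exp(-beta * i_dx)
    w_y = np.exp(-beta * i_dy)

    return (w_x * d_dx).mean() + (w_y * d_dy).mean()
```

A binary Canny edge mask can play the same role as the exponential weight (set the weight to 0 on dilated edge pixels, 1 elsewhere); the exponential form above is simply a soft, differentiable variant of that masking.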