🤖 AI Summary
For surface reconstruction and novel view synthesis from sparse views, existing signed distance function (SDF)-based methods struggle to recover fine geometric details, while 3D Gaussian Splatting (3DGS) lacks global geometric consistency. This paper proposes the first bidirectional co-optimization framework integrating SDF and 3DGS: SDF models globally consistent coarse geometry, while 3DGS enables high-fidelity rendering. Geometrically guided rendering and rendering-feedback-driven detail refinement iteratively refine both representations. The framework thus preserves structural integrity while substantially improving local geometric fidelity. Extensive experiments on the DTU and MobileBrick datasets demonstrate state-of-the-art performance in both surface reconstruction accuracy (e.g., Chamfer distance, F-Score) and novel view synthesis quality (e.g., PSNR, SSIM, LPIPS), outperforming prior approaches on all metrics.
📝 Abstract
Surface reconstruction and novel view rendering from sparse-view images are challenging. Signed Distance Function (SDF)-based methods struggle with fine details, while 3D Gaussian Splatting (3DGS)-based approaches lack global geometric coherence. We propose a novel hybrid method that combines the strengths of both: SDF captures coarse geometry to enhance 3DGS-based rendering, while newly rendered images from 3DGS refine the SDF's details for accurate surface reconstruction. As a result, our method surpasses state-of-the-art approaches in surface reconstruction and novel view synthesis on the DTU and MobileBrick datasets. Code will be released at https://github.com/Gaozihui/SurfaceSplat.
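The core idea of the bidirectional loop (coarse SDF geometry guides Gaussian placement; the optimized Gaussians feed detail back into the SDF) can be illustrated with a minimal NumPy sketch. This is not the paper's implementation: the SDF is a toy sphere, the "rendering feedback" is simulated by perturbing Gaussian centers, and all function names (`sdf_sphere`, `init_gaussians_from_sdf`, `refine_sdf_radius`) are hypothetical stand-ins for the learned components.

```python
import numpy as np

def sdf_sphere(points, center, radius):
    """Toy SDF: signed distance to a sphere (stand-in for a learned SDF)."""
    return np.linalg.norm(points - center, axis=-1) - radius

def init_gaussians_from_sdf(n, center, radius, rng):
    """Geometric guidance: place Gaussian centers on the SDF zero level set."""
    dirs = rng.normal(size=(n, 3))
    dirs /= np.linalg.norm(dirs, axis=-1, keepdims=True)
    return center + radius * dirs  # points on the coarse surface

def refine_sdf_radius(radius, gaussian_centers, center, lr=0.5):
    """Rendering feedback (simplified): pull the SDF's radius toward the
    Gaussians' mean distance from the center, mimicking detail refinement."""
    target = np.mean(np.linalg.norm(gaussian_centers - center, axis=-1))
    return radius + lr * (target - radius)

rng = np.random.default_rng(0)
center, radius = np.zeros(3), 1.0
for it in range(3):
    # SDF -> 3DGS: coarse geometry initializes/constrains the Gaussians
    g = init_gaussians_from_sdf(256, center, radius, rng)
    # stand-in for photometric optimization: Gaussians drift to the
    # "true" slightly larger surface observed in the rendered images
    g *= 1.05
    # 3DGS -> SDF: optimized Gaussians refine the SDF geometry
    radius = refine_sdf_radius(radius, g, center)
```

In this simplified loop each representation converges toward the other's evidence, which is the alternation pattern the framework describes; the real method replaces the sphere with a neural SDF and the drift step with differentiable rasterization and photometric losses.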