🤖 AI Summary
To address geometric drift in 3D Gaussian splatting for multi-view reconstruction—caused by inconsistencies between per-view surface normals and depth estimates, especially pronounced between adjacent views—this paper proposes a multi-view normal- and distance-guided reconstruction framework. Methodologically, it introduces a cross-view distance reprojection regularization term and a normal enhancement module, jointly optimizing pixel-wise normal correspondence, 3D normal alignment, and multi-view depth consistency. By explicitly incorporating these geometric priors, the approach mitigates per-view estimation bias and improves the robustness of surface reconstruction. Evaluated on small-scale indoor and outdoor scenes, the method outperforms its 3DGS baseline: quantitative metrics such as Chamfer distance and normal consistency, along with qualitative visual comparisons, demonstrate improved geometric accuracy and rendering fidelity.
📝 Abstract
3D Gaussian Splatting (3DGS) achieves remarkable results in surface reconstruction. However, when Gaussian normals are aligned only within a single view's projection plane, the geometry may appear reasonable in that view yet exhibit biases once the camera switches to nearby views. To address the distance and global matching challenges in multi-view scenes, we design a multi-view normal- and distance-guided Gaussian splatting method that achieves unified geometric depth and high-accuracy reconstruction by constraining nearby depth maps and aligning 3D normals. Specifically, for the reconstruction of small indoor and outdoor scenes, we propose a multi-view distance reprojection regularization module, which aligns Gaussians across views by computing a distance loss between the same Gaussian surface as observed from two nearby views. In addition, we develop a multi-view normal enhancement module, which enforces cross-view consistency by matching per-pixel normals between nearby views and penalizing their disagreement. Extensive experiments demonstrate that our method outperforms the baseline in both quantitative and qualitative evaluations, significantly enhancing the surface reconstruction capability of 3DGS.
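The abstract does not give the loss formulas, but the two modules it describes can be illustrated with a toy sketch. The snippet below is a minimal NumPy illustration under assumed conventions (pinhole intrinsics `K`, relative pose `R_ab`, `t_ab` mapping view A's camera frame into view B's): the first function back-projects view A's depth map, transforms the points into view B, and compares the transformed depths against view B's depth map (a cross-view distance reprojection loss); the second rotates view A's per-pixel normals into view B's frame and penalizes `1 − cosine` similarity (a multi-view normal consistency loss). All function names and the pixel-aligned simplification in the normal loss are hypothetical, not the paper's implementation.

```python
import numpy as np

def reproject_depth_loss(depth_a, depth_b, K, R_ab, t_ab):
    """Toy cross-view distance reprojection loss: back-project each
    pixel of view A with its depth, transform into view B's frame,
    project, and compare against view B's depth at the hit pixel."""
    H, W = depth_a.shape
    K_inv = np.linalg.inv(K)
    us, vs = np.meshgrid(np.arange(W), np.arange(H))
    pix = np.stack([us, vs, np.ones_like(us)], axis=-1).reshape(-1, 3).T  # 3 x N
    pts_a = (K_inv @ pix) * depth_a.reshape(1, -1)   # 3D points in A's frame
    pts_b = R_ab @ pts_a + t_ab[:, None]             # transform into B's frame
    proj = K @ pts_b
    z = proj[2]
    valid = z > 1e-6
    u = np.round(np.divide(proj[0], z, where=valid, out=np.zeros_like(z))).astype(int)
    v = np.round(np.divide(proj[1], z, where=valid, out=np.zeros_like(z))).astype(int)
    valid &= (u >= 0) & (u < W) & (v >= 0) & (v < H)
    if not valid.any():
        return 0.0
    return float(np.abs(z[valid] - depth_b[v[valid], u[valid]]).mean())

def normal_consistency_loss(normals_a, normals_b, R_ab):
    """Toy multi-view normal loss: rotate A's unit normals (H x W x 3)
    into B's frame and penalize 1 - cosine similarity; for simplicity
    this assumes the two normal maps are already pixel-aligned."""
    n_rot = normals_a.reshape(-1, 3) @ R_ab.T
    cos = np.sum(n_rot * normals_b.reshape(-1, 3), axis=1)
    return float(np.mean(1.0 - cos))
```

With identical views (identity rotation, zero translation, matching depth and normal maps) both losses are zero; during 3DGS optimization such terms would be summed over nearby view pairs alongside the photometric loss.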