🤖 AI Summary
This work proposes DCCVT, the first fully differentiable Clipped Centroidal Voronoi Tessellation algorithm. It addresses two limitations of existing pipelines: differentiable 3D mesh extraction methods such as Marching Cubes often produce low-quality meshes, and high-quality Clipped CVT has lacked a differentiable formulation, preventing its integration into deep learning pipelines. DCCVT enables end-to-end reconstruction of high-fidelity 3D meshes directly from noisy signed distance fields (SDFs) by combining differentiable geometric optimization with deep learning-based SDF estimation. On synthetic data, the method outperforms current approaches in both mesh quality and reconstruction fidelity.
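To make the input representation concrete: a signed distance field assigns each point the distance to the nearest surface, negative inside and positive outside. The sketch below (an illustration, not the paper's learned estimator) evaluates an analytic sphere SDF on a grid and perturbs it with noise, mimicking the noisy SDFs DCCVT is said to consume:

```python
import numpy as np

def sphere_sdf(points, center, radius):
    """Signed distance to a sphere: negative inside, positive outside."""
    return np.linalg.norm(points - center, axis=-1) - radius

# Evaluate the SDF on a coarse 32^3 grid over [-1, 1]^3, then add Gaussian
# noise to imitate the imperfect output of a learned SDF estimator.
xs = np.linspace(-1.0, 1.0, 32)
grid = np.stack(np.meshgrid(xs, xs, xs, indexing="ij"), axis=-1)
sdf = sphere_sdf(grid, center=np.zeros(3), radius=0.5)
rng = np.random.default_rng(0)
noisy_sdf = sdf + rng.normal(scale=0.01, size=sdf.shape)
```

A mesh extractor (Marching Cubes, or the clipped CVT approach here) then recovers the zero level set of such a field.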
📝 Abstract
While Marching Cubes (MC) and Marching Tetrahedra (MTet) are widely adopted in 3D reconstruction pipelines for their simplicity and efficiency, their differentiable variants remain suboptimal for mesh extraction, which limits the quality of 3D meshes reconstructed from point clouds or images in learning-based frameworks. In contrast, clipped Centroidal Voronoi Tessellations (CVTs) offer stronger theoretical guarantees and yield higher-quality meshes, but the lack of a differentiable formulation has prevented their integration into modern machine learning pipelines. To bridge this gap, we propose DCCVT, a differentiable algorithm that extracts high-quality 3D meshes from noisy signed distance fields (SDFs) using clipped CVTs. We derive a fully differentiable formulation for computing clipped CVTs and integrate it with deep learning-based SDF estimation to reconstruct accurate 3D meshes from input point clouds. Experiments on synthetic data show that DCCVT outperforms state-of-the-art methods in mesh quality and reconstruction fidelity. https://wylliamcantincharawi.dev/DCCVT.github.io/
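For readers unfamiliar with CVTs: a Voronoi tessellation is *centroidal* when every site coincides with the centroid of its own cell, and the classical way to reach that state is Lloyd relaxation. The sketch below is not the paper's differentiable clipped formulation; it is the standard non-differentiable baseline, discretized k-means-style by assigning dense samples of the domain to their nearest site and moving each site to the centroid of its assigned samples:

```python
import numpy as np

def lloyd_cvt(sites, samples, iters=20):
    """Approximate a Centroidal Voronoi Tessellation by Lloyd relaxation:
    repeatedly assign samples to the nearest site, then move each site to
    the centroid (mean) of its assigned samples."""
    sites = sites.copy()
    for _ in range(iters):
        # Squared Euclidean distance from every sample to every site.
        d2 = ((samples[:, None, :] - sites[None, :, :]) ** 2).sum(-1)
        labels = d2.argmin(axis=1)
        for k in range(len(sites)):
            members = samples[labels == k]
            if len(members):  # skip sites whose cell captured no samples
                sites[k] = members.mean(axis=0)
    return sites

rng = np.random.default_rng(0)
samples = rng.random((20000, 2))   # dense samples of the unit square
sites = rng.random((16, 2))        # random initial Voronoi sites
cvt_sites = lloyd_cvt(sites, samples)
```

Each Lloyd step monotonically decreases the CVT quantization energy, which is why CVT cells (and the meshes derived from them) end up well-shaped; DCCVT's contribution is making a clipped variant of this optimization differentiable so it can sit inside a learning pipeline.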