AI Summary
This work addresses explicit surface reconstruction from unoriented 3D point clouds, i.e., without normal-vector priors. We propose the first normal-free, two-stage neural reconstruction framework. In the first stage, a local geometric encoder coupled with a differentiable triangle-prediction network generates an initial triangular mesh directly from unordered point clouds. The second stage employs optimization-based per-point offset learning to improve cross-scene generalization. Our method eliminates the dual dependency of implicit representations on normal inputs and post-hoc Marching Cubes extraction, achieving superior fidelity on sharp edges and fine geometric details while preserving global structural integrity. Evaluated on both small-scale objects and large-scale open surfaces, our approach surpasses state-of-the-art methods in surface accuracy and geometric-detail preservation.
Abstract
Neural surface reconstruction has been dominated by implicit representations, with marching cubes used for explicit surface extraction. However, these methods typically require high-quality normals for accurate reconstruction. We propose OffsetOPT, a method that reconstructs explicit surfaces directly from 3D point clouds and eliminates the need for point normals. The approach comprises two stages: first, we train a neural network to predict surface triangles from local point geometry, using uniformly distributed training point clouds. Next, we apply the frozen network to reconstruct surfaces from unseen point clouds by optimizing a per-point offset that maximizes the accuracy of its triangle predictions. Compared to state-of-the-art methods, OffsetOPT not only excels at reconstructing overall surfaces but also preserves sharp surface features significantly better. We demonstrate its accuracy on popular benchmarks, including small-scale shapes and large-scale open surfaces.
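The second stage described above can be sketched in code. The sketch below is purely illustrative and makes heavy assumptions: `frozen_score` is a toy stand-in for the paper's frozen triangle-prediction network (here it simply peaks when points lie on a unit sphere), and all function names are hypothetical, not the authors' API. The point of the sketch is only the structure of the stage: network weights stay fixed while a per-point offset is optimized to raise the frozen score.

```python
import math
import random

def frozen_score(points):
    """Toy stand-in for the frozen network's prediction quality.

    Highest (0.0) when every point lies on the toy 'surface' (unit sphere).
    """
    return -sum((math.dist(p, (0.0, 0.0, 0.0)) - 1.0) ** 2 for p in points) / len(points)

def optimize_offsets(points, steps=200, lr=0.1):
    """Gradient ascent on the frozen score w.r.t. per-point offsets only."""
    offsets = [[0.0, 0.0, 0.0] for _ in points]
    for _ in range(steps):
        for i, p in enumerate(points):
            q = [p[j] + offsets[i][j] for j in range(3)]  # current offset point
            r = max(math.dist(q, (0.0, 0.0, 0.0)), 1e-8)
            # Per-point gradient of -(r - 1)^2 w.r.t. q is -2 (r - 1) q / r
            for j in range(3):
                offsets[i][j] += lr * (-2.0 * (r - 1.0) * q[j] / r)
    return offsets

random.seed(0)
pts = [[random.gauss(0.0, 1.5) for _ in range(3)] for _ in range(64)]  # noisy off-surface samples
off = optimize_offsets(pts)
moved = [[p[j] + o[j] for j in range(3)] for p, o in zip(pts, off)]
print(frozen_score(pts), frozen_score(moved))  # score improves after offset optimization
```

In the actual method, the score would come from the stage-one network (kept frozen) evaluated on triangle predictions, and the offsets would be optimized with a standard autodiff optimizer rather than this hand-derived toy gradient.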