🤖 AI Summary
This paper addresses visibility determination for 3D point clouds from a given viewpoint. The authors propose the first end-to-end deep learning method that casts visibility as a per-point binary classification task, overcoming key limitations of traditional geometric approaches such as Hidden Point Removal (HPR): low computational efficiency, sensitivity to noise, poor handling of concave regions, and degradation on low-density point clouds. The method employs a 3D U-Net to extract view-independent, point-wise features, fuses them with a view-direction encoding, and applies a shared MLP to predict per-point visibility. Ground-truth visibility labels are generated via differentiable rendering. Extensive evaluation on ShapeNet, the ABC Dataset, and real-world scans demonstrates substantial improvements over HPR, including up to 126× faster inference, along with strong generalization and robustness to noise. The predicted visibility also significantly improves downstream tasks such as point cloud visualization, surface reconstruction, and normal estimation.
📝 Abstract
Point clouds are widely used representations of 3D data, but determining the visibility of points from a given viewpoint remains a challenging problem due to their sparse nature and lack of explicit connectivity. Traditional methods, such as Hidden Point Removal (HPR), face limitations in computational efficiency, robustness to noise, and handling of concave regions or low-density point clouds. In this paper, we propose a novel approach to visibility determination in point clouds by formulating it as a binary classification task. The core of our network consists of a 3D U-Net that extracts view-independent point-wise features and a shared multi-layer perceptron (MLP) that predicts point visibility using the extracted features and the view direction as inputs. The network is trained end-to-end with ground-truth visibility labels generated from rendered 3D models. Our method significantly outperforms HPR in both accuracy and computational efficiency, achieving up to a 126× speedup on large point clouds. Additionally, our network demonstrates robustness to noise and varying point cloud densities, and generalizes well to unseen shapes. We validate the effectiveness of our approach through extensive experiments on ShapeNet, the ABC Dataset, and real-world datasets, showing substantial improvements in visibility accuracy. We also demonstrate the versatility of our method in various applications, including point cloud visualization, surface reconstruction, normal estimation, shadow rendering, and viewpoint optimization. Our code and models are available at https://github.com/octree-nn/neural-visibility.
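The inference pipeline described in the abstract (view-independent per-point features fused with the view direction, then a shared MLP that classifies each point) can be sketched in NumPy. This is a minimal illustration, not the paper's implementation: the feature matrix stands in for the 3D U-Net output, and all layer sizes, weight names, and the plain concatenation-based fusion are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def shared_mlp_visibility(point_feats, view_dir, weights):
    """Predict per-point visibility probabilities.

    point_feats : (N, C) view-independent features (stand-in for the
                  paper's 3D U-Net output).
    view_dir    : (3,) unit vector encoding the viewing direction.
    weights     : dict of shared-MLP parameters (illustrative sizes).
    """
    n = point_feats.shape[0]
    # Fuse: broadcast the view-direction encoding to every point and concatenate.
    fused = np.concatenate([point_feats, np.tile(view_dir, (n, 1))], axis=1)
    # Two-layer MLP applied point-wise with the SAME weights for every point.
    hidden = np.maximum(fused @ weights["W1"] + weights["b1"], 0.0)  # ReLU
    logits = hidden @ weights["W2"] + weights["b2"]                  # (N, 1)
    return 1.0 / (1.0 + np.exp(-logits[:, 0]))                       # sigmoid

# Toy example: 64-dim features for 1000 points, random (untrained) weights.
C, H, N = 64, 128, 1000
weights = {
    "W1": rng.standard_normal((C + 3, H)) * 0.1,
    "b1": np.zeros(H),
    "W2": rng.standard_normal((H, 1)) * 0.1,
    "b2": np.zeros(1),
}
feats = rng.standard_normal((N, C))
view = np.array([0.0, 0.0, 1.0])
probs = shared_mlp_visibility(feats, view, weights)
visible = probs > 0.5  # binary visibility decision per point
```

Because the MLP weights are shared across points, the cost per view is a single batched matrix product over all N points, which is what makes the learned approach so much faster than HPR's convex-hull construction on large point clouds.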