🤖 AI Summary
Existing point cloud upsampling methods commonly adopt local patch-based inputs, yet lack a systematic analysis of the fundamental differences between the full-model and patch-based input paradigms. Method: This work first shows that input organization is a critical performance determinant: patch-based input significantly outperforms full-model uniform partitioning (a 12.3% reduction in Chamfer Distance on PU1K). We propose a model-level uniform partitioning strategy that preserves shape integrity and instantiate a patch-wise training paradigm within the PU-GCN framework. Through geometric-consistency evaluation and ablation studies, we identify a strong coupling effect between input granularity and the feature aggregation modules. Contribution/Results: Our study establishes theoretical foundations and practical guidelines for data organization and network architecture design in point cloud upsampling, bridging a critical gap between input representation and model behavior.
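Chamfer Distance, the metric cited above, is the average nearest-neighbor distance between two point sets, taken in both directions. A minimal NumPy sketch for intuition (the function name and the brute-force pairwise-distance matrix are illustrative assumptions, not the paper's implementation, which may use squared or unsquared variants and GPU kernels):

```python
import numpy as np

def chamfer_distance(p: np.ndarray, q: np.ndarray) -> float:
    """Symmetric Chamfer Distance between point sets p (N, 3) and q (M, 3).

    Uses a brute-force (N, M) squared-distance matrix; fine for small clouds,
    but real pipelines typically use a KD-tree or GPU nearest-neighbor search.
    """
    # Pairwise squared Euclidean distances, shape (N, M).
    d = np.sum((p[:, None, :] - q[None, :, :]) ** 2, axis=-1)
    # Mean nearest-neighbor distance in each direction, then summed.
    return d.min(axis=1).mean() + d.min(axis=0).mean()
```

A lower value means the upsampled cloud lies closer to the ground-truth surface, which is why a 12.3% reduction is reported as an improvement.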
📄 Abstract
In recent years, point cloud upsampling has been widely applied in fields such as 3D reconstruction and surface generation. However, existing point cloud upsampling methods all take patch-based input, and no study has examined the differences and underlying principles between full-model input and patch-based input. To enable a comparison with patch-based input, this article proposes a new data input method that uniformly partitions the full point cloud model while preserving shape integrity, and trains PU-GCN on the resulting segments. The method is validated on the PU1K and ABC datasets; the results show that patch-based input outperforms full-model input (i.e., Average Segment input). This article therefore investigates the data input factors and model modules that affect point cloud upsampling results.
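The two input paradigms contrasted above can be sketched as follows. Both functions are hypothetical illustrations, not the paper's code: `knn_patches` builds overlapping kNN neighborhoods around random seed points (the conventional patch-based input), while `uniform_segments` splits the whole model into disjoint, spatially coherent segments along its longest bounding-box axis, one plausible reading of partitioning the full model while preserving shape integrity:

```python
import numpy as np

def knn_patches(points, num_patches, patch_size, seed=None):
    """Patch-based input (assumed form): kNN neighborhoods around seed points.

    Patches may overlap and need not cover the whole model.
    Returns an array of shape (num_patches, patch_size, 3).
    """
    rng = np.random.default_rng(seed)
    seeds = points[rng.choice(len(points), num_patches, replace=False)]
    # Squared distances from each seed to every point, shape (num_patches, N).
    d = np.sum((seeds[:, None, :] - points[None, :, :]) ** 2, axis=-1)
    idx = np.argsort(d, axis=1)[:, :patch_size]  # patch_size nearest per seed
    return points[idx]

def uniform_segments(points, num_segments):
    """Model-level uniform partitioning (assumed form): disjoint segments.

    Points are ordered along the longest bounding-box axis and split into
    equal-size groups, so each segment is spatially coherent and the full
    shape is covered exactly once.
    """
    axis = int(np.argmax(points.max(axis=0) - points.min(axis=0)))
    order = np.argsort(points[:, axis])
    return np.array_split(points[order], num_segments)
```

The key behavioral difference this sketch exposes is coverage: segments partition the model exactly (no point duplicated or dropped), whereas kNN patches trade exact coverage for denser, overlapping local context.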