Graph-Guided Dual-Level Augmentation for 3D Scene Segmentation

📅 2025-07-30
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing 3D point cloud segmentation data augmentation methods primarily focus on local geometric transformations or semantic recombination, neglecting global structural dependencies among scene-level objects. To address this, we propose a graph-guided dual-level augmentation framework: first, learning object relationships from real point clouds to construct a structure-guiding graph; second, jointly enforcing local geometric-semantic constraints and global topological alignment to generate high-fidelity synthetic scenes that preserve both structural integrity and semantic consistency. This work is the first to explicitly incorporate graph-structured modeling into 3D point cloud data augmentation, balancing local geometric realism with global scene plausibility. Evaluated on ScanNet, S3DIS, and SemanticKITTI benchmarks, our augmented data consistently improves the generalization performance of diverse segmentation models, demonstrating the effectiveness and cross-dataset applicability of structure-aware augmentation.

📝 Abstract
3D point cloud segmentation aims to assign semantic labels to individual points in a scene for fine-grained spatial understanding. Existing methods typically adopt data augmentation to alleviate the burden of large-scale annotation. However, most augmentation strategies only focus on local transformations or semantic recomposition, lacking the consideration of global structural dependencies within scenes. To address this limitation, we propose a graph-guided data augmentation framework with dual-level constraints for realistic 3D scene synthesis. Our method learns object relationship statistics from real-world data to construct guiding graphs for scene generation. Local-level constraints enforce geometric plausibility and semantic consistency between objects, while global-level constraints maintain the topological structure of the scene by aligning the generated layout with the guiding graph. Extensive experiments on indoor and outdoor datasets demonstrate that our framework generates diverse and high-quality augmented scenes, leading to consistent improvements in point cloud segmentation performance across various models.
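The abstract's two core ideas, a guiding graph learned from object relationship statistics in real scenes, and a global-level check that a generated layout aligns with that graph, can be illustrated with a minimal sketch. This is not the authors' code: `build_guiding_graph`, `layout_score`, the centroid-distance statistic, and the tolerance-based alignment score are all simplified, hypothetical stand-ins for the paper's learned structure-guiding graph and topological-alignment constraint.

```python
import itertools
from collections import defaultdict


def build_guiding_graph(scenes):
    """Estimate pairwise object relationship statistics from real scenes.

    `scenes` is a list of scenes, each a list of (label, (x, y, z)) object
    centroids; returns a dict mapping sorted label pairs to their mean
    centroid distance (an illustrative stand-in for a learned guiding graph).
    """
    dist_sums = defaultdict(float)
    counts = defaultdict(int)
    for scene in scenes:
        for (la, pa), (lb, pb) in itertools.combinations(scene, 2):
            key = tuple(sorted((la, lb)))
            d = sum((a - b) ** 2 for a, b in zip(pa, pb)) ** 0.5
            dist_sums[key] += d
            counts[key] += 1
    return {k: dist_sums[k] / counts[k] for k in counts}


def layout_score(layout, graph, tol=0.5):
    """Global-level alignment check: fraction of object pairs in a synthetic
    layout whose distance stays within `tol` of the graph's mean distance."""
    hits, total = 0, 0
    for (la, pa), (lb, pb) in itertools.combinations(layout, 2):
        key = tuple(sorted((la, lb)))
        if key not in graph:
            continue  # pair never observed in real data; skip it
        d = sum((a - b) ** 2 for a, b in zip(pa, pb)) ** 0.5
        total += 1
        hits += abs(d - graph[key]) <= tol
    return hits / total if total else 0.0
```

In the paper, local-level constraints (geometric plausibility, semantic consistency between neighboring objects) would act alongside this global score during scene synthesis; here only the global graph-alignment side is sketched.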
Problem

Research questions and friction points this paper is trying to address.

Enhance 3D point cloud segmentation via structural augmentation
Address lack of global structural dependencies in augmentation
Improve scene synthesis with dual-level geometric constraints
Innovation

Methods, ideas, or system contributions that make the work stand out.

Graph-guided framework for 3D scene synthesis
Dual-level constraints ensure geometric plausibility
Aligns generated layout with global topology
Hongbin Lin
The Hong Kong University of Science and Technology (Guangzhou), Guangzhou, China
Yifan Jiang
The Hong Kong University of Science and Technology (Guangzhou), Guangzhou, China
Juangui Xu
The Hong Kong University of Science and Technology (Guangzhou), Guangzhou, China
Jesse Jiaxi Xu
University of Toronto
Yi Lu
The Hong Kong University of Science and Technology (Guangzhou), Guangzhou, China
Zhengyu Hu
The Hong Kong University of Science and Technology (Guangzhou), Guangzhou, China
Ying-Cong Chen
Hong Kong University of Science and Technology (Guangzhou)
Computer Vision and Pattern Recognition
Hao Wang
The Hong Kong University of Science and Technology (Guangzhou), Guangzhou, China