PDDM: Pseudo Depth Diffusion Model for RGB-PD Semantic Segmentation Based in Complex Indoor Scenes

📅 2025-03-24
📈 Citations: 0
Influential: 0
🤖 AI Summary
In complex indoor semantic segmentation, acquiring real RGB-D data is costly, cross-modal registration is challenging, and sensor noise degrades performance. To address these issues, this paper proposes a multi-modal segmentation framework that pairs RGB images with pseudo depth (PD) produced by high-precision monocular depth estimation. Key contributions: (1) a Pseudo Depth Aggregation Module (PDAM) that fuses heterogeneous pseudo-depth maps from multiple sources into a single modality, making it easily adaptable to other RGB-D segmentation methods; (2) the Pseudo Depth Diffusion Model (PDDM), the first RGB-PD segmentation model to adopt a pre-trained large-scale text-to-image diffusion model as its feature extractor, combined with a simple yet effective fusion strategy for integrating pseudo depth. Evaluated on NYUv2 and SUNRGB-D, the method achieves state-of-the-art performance, improving mIoU by 6.98 and 2.11 points, respectively. The results demonstrate that pseudo depth can effectively substitute for real depth while substantially enhancing segmentation accuracy.

📝 Abstract
The integration of RGB and depth modalities significantly enhances the accuracy of segmenting complex indoor scenes, with depth data from RGB-D cameras playing a crucial role in this improvement. However, collecting an RGB-D dataset is more expensive than collecting an RGB dataset due to the need for specialized depth sensors. Aligning depth and RGB images also poses challenges due to sensor positioning and issues such as missing data and noise. In contrast, Pseudo Depth (PD) from high-precision depth estimation algorithms can eliminate the dependence on RGB-D sensors and alignment processes, provide effective depth information, and show significant potential for semantic segmentation. Therefore, to explore the practicality of utilizing pseudo depth instead of real depth for semantic segmentation, we design an RGB-PD segmentation pipeline to integrate RGB and pseudo depth, and propose a Pseudo Depth Aggregation Module (PDAM) to fully exploit the informative clues provided by diverse pseudo-depth maps. The PDAM aggregates multiple pseudo-depth maps into a single modality, making it easily adaptable to other RGB-D segmentation methods. In addition, pre-trained diffusion models serve as strong feature extractors for RGB segmentation tasks, but multi-modal diffusion-based segmentation methods remain unexplored. We therefore present a Pseudo Depth Diffusion Model (PDDM) that adopts a large-scale text-to-image diffusion model as a feature extractor, together with a simple yet effective fusion strategy to integrate pseudo depth. To verify the applicability of pseudo depth and our PDDM, we perform extensive experiments on the NYUv2 and SUNRGB-D datasets. The results demonstrate that pseudo depth can effectively enhance segmentation performance, and our PDDM achieves state-of-the-art results, outperforming other methods by +6.98 mIoU on NYUv2 and +2.11 mIoU on SUNRGB-D.
Problem

Research questions and friction points this paper is trying to address.

Replacing real depth with pseudo depth for semantic segmentation
Integrating RGB and pseudo depth without alignment issues
Enhancing segmentation using diffusion models for multi-modal data
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses pseudo depth instead of real depth
Integrates RGB and pseudo depth with PDAM
Adopts diffusion model for feature extraction
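The paper describes PDAM as aggregating multiple pseudo-depth maps (from different depth estimators) into a single modality before fusion with RGB features. The sketch below illustrates the general idea with a fixed confidence-weighted average; the names (`aggregate_pseudo_depth`, the score vector) are hypothetical, and the actual PDAM presumably learns its aggregation weights from features rather than taking them as inputs.

```python
import numpy as np

def aggregate_pseudo_depth(depth_maps, scores):
    """Fuse N pseudo-depth maps (each H x W) into one map using
    softmax-normalized per-map confidence scores.

    Illustrative stand-in for PDAM: the paper's module learns how to
    combine maps end-to-end; here the weights are supplied directly.
    """
    stack = np.stack(depth_maps, axis=0)   # shape (N, H, W)
    w = np.exp(scores - np.max(scores))
    w = w / w.sum()                        # softmax over the N maps
    # weighted sum over the map axis -> single (H, W) pseudo-depth map
    return np.tensordot(w, stack, axes=1)

# Toy example: two 2x2 pseudo-depth maps with equal confidence
d1 = np.full((2, 2), 1.0)
d2 = np.full((2, 2), 3.0)
fused = aggregate_pseudo_depth([d1, d2], np.array([0.0, 0.0]))
# equal scores -> plain average of the two maps
```

The fused map then plays the role of the depth channel in any RGB-D segmentation pipeline, which is what makes the aggregated pseudo depth "easily adaptable to other RGB-D segmentation methods" as the abstract claims.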
👥 Authors
Xinhua Xu (Peking University, Computer Vision)
Hong Liu (State Key Laboratory of General Artificial Intelligence, Peking University, Shenzhen Graduate School)
Jianbing Wu (State Key Laboratory of General Artificial Intelligence, Peking University, Shenzhen Graduate School)
Jinfu Liu (State Key Laboratory of General Artificial Intelligence, Peking University, Shenzhen Graduate School)