Consistency Diffusion Models for Single-Image 3D Reconstruction with Priors

πŸ“… 2025-01-28
πŸ€– AI Summary
This paper addresses geometric inconsistency and detail distortion in single-image 3D point cloud reconstruction. The authors propose a consistency-aware diffusion model that jointly models 2D image priors and 3D structural priors within a Bayesian framework, enabling effective cross-modal constraint fusion. Key contributions include: (1) the first incorporation of an initial 3D point cloud into the variational lower bound as an explicit geometric regularization term to strengthen diffusion training; and (2) a differentiable 2D→3D feature projection mechanism that enables precise image-prior guidance in point cloud space. The method achieves state-of-the-art performance on both synthetic and real-world benchmarks, significantly improving reconstruction completeness, topological consistency, and fine-grained geometric fidelity.
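The summary's first contribution, an explicit geometric regularization term on the diffusion objective, can be sketched as a denoising loss plus a Chamfer-distance pull toward the initial point cloud. This is an illustrative assumption, not the paper's actual variational formulation: the function names, the weight `lam`, and the use of symmetric Chamfer distance are all hypothetical.

```python
import numpy as np

def chamfer(a, b):
    # Symmetric Chamfer distance between point sets a (N,3) and b (M,3):
    # average nearest-neighbor distance in both directions.
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    return d.min(axis=1).mean() + d.min(axis=0).mean()

def diffusion_loss_with_prior(eps_pred, eps_true, x0_pred, prior_pc, lam=0.1):
    # Standard denoising objective (MSE on predicted noise) plus a
    # geometric regularizer pulling the predicted clean point cloud
    # toward the initial 3D point-cloud prior.
    denoise = np.mean((eps_pred - eps_true) ** 2)
    reg = chamfer(x0_pred, prior_pc)
    return denoise + lam * reg
```

When the predicted cloud already matches the prior, the regularizer vanishes and the objective reduces to the plain denoising loss, which is the sense in which the prior "tightly governs" training without replacing it.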

πŸ“ Abstract
This paper delves into the study of 3D point cloud reconstruction from a single image. Our objective is to develop the Consistency Diffusion Model, exploring synergistic 2D and 3D priors in the Bayesian framework to ensure superior consistency in the reconstruction process, a challenging yet critical requirement in this field. Specifically, we introduce a pioneering training framework under diffusion models that brings two key innovations. First, we convert 3D structural priors derived from the initial 3D point cloud as a bound term to increase evidence in the variational Bayesian framework, leveraging these robust intrinsic priors to tightly govern the diffusion training process and bolster consistency in reconstruction. Second, we extract and incorporate 2D priors from the single input image, projecting them onto the 3D point cloud to enrich the guidance for diffusion training. Our framework not only sidesteps potential model learning shifts that may arise from directly imposing additional constraints during training but also precisely transposes the 2D priors into the 3D domain. Extensive experimental evaluations reveal that our approach sets new benchmarks in both synthetic and real-world datasets. The code is included with the submission.
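The abstract's second innovation, projecting 2D priors from the input image onto the 3D point cloud, can be sketched as a pinhole projection of each point into the image plane followed by a per-point feature lookup. A minimal sketch under stated assumptions: the intrinsics matrix `K`, the nearest-neighbor lookup (in place of whatever differentiable sampling the paper uses), and all names here are hypothetical.

```python
import numpy as np

def project_features(points, feat_map, K):
    # Project 3D points (N,3) through camera intrinsics K (3,3) into
    # pixel coordinates, then gather a feature vector for each point
    # from the image feature map feat_map (H,W,C).
    uvw = points @ K.T                 # homogeneous pixel coordinates (N,3)
    uv = uvw[:, :2] / uvw[:, 2:3]      # perspective divide -> (u,v)
    h, w, _ = feat_map.shape
    u = np.clip(np.round(uv[:, 0]).astype(int), 0, w - 1)
    v = np.clip(np.round(uv[:, 1]).astype(int), 0, h - 1)
    return feat_map[v, u]              # (N,C) image features per 3D point
```

The returned per-point features can then condition the diffusion model, which is how 2D priors are "precisely transposed" into the 3D domain in spirit; the paper's actual mechanism is differentiable end to end, which a bilinear sampler would provide.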
Problem

Research questions and friction points this paper is trying to address.

2D to 3D Reconstruction
Consistency Preservation
Stereo Imaging
Innovation

Methods, ideas, or system contributions that make the work stand out.

Consistency Diffusion Model
3D structural information
2D visual cues integration
Authors
Chenru Jiang (Duke Kunshan University; Xi'an Jiaotong-Liverpool University)
Chengrui Zhang (Xi'an Jiaotong-Liverpool University)
Xi Yang (Xi'an Jiaotong-Liverpool University)
Jie Sun (Xi'an Jiaotong-Liverpool University)
Kaizhu Huang (Professor, Duke Kunshan University)