🤖 AI Summary
To address low visual quality and geometric inconsistency in text-to-3D generation, this paper proposes a 3D optimization framework guided by the gradients of diffusion ODE/SDE sampling. Methodologically, it introduces (1) multi-view-consistent Gaussian noise modeling on the 3D object, integrated with differentiable rendering to enforce cross-view gradient consistency; and (2) flow-based consistency regularization that avoids the maximum-likelihood-seeking behavior limiting Score Distillation Sampling (SDS). Without requiring any 3D supervision, the method significantly improves the geometric fidelity, texture sharpness, and multi-view consistency of generated 3D assets. Extensive experiments show that it outperforms SDS and other baselines on standard text-to-3D benchmarks, achieving state-of-the-art (SOTA) performance in both qualitative and quantitative evaluations.
📝 Abstract
Score Distillation Sampling (SDS) has made significant strides in distilling image-generative models for 3D generation. However, its maximum-likelihood-seeking behavior often leads to degraded visual quality and diversity, limiting its effectiveness in 3D applications. In this work, we propose Consistent Flow Distillation (CFD), which addresses these limitations. We begin by leveraging the gradient of the diffusion ODE or SDE sampling process to guide the 3D generation. From the gradient-based sampling perspective, we find that the consistency of 2D image flows across different viewpoints is important for high-quality 3D generation. To achieve this, we introduce multi-view consistent Gaussian noise on the 3D object, which can be rendered from various viewpoints to compute the flow gradient. Our experiments demonstrate that CFD, through consistent flows, significantly outperforms previous methods in text-to-3D generation.
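The core idea, as described in the abstract, can be caricatured in a few lines: attach a fixed Gaussian noise sample to the 3D object itself, render both the object and its attached noise from each viewpoint, and use the diffusion model's noise-prediction residual as a gradient on the 3D parameters. The sketch below is a toy illustration under loose assumptions, not the paper's implementation: `render`, `toy_denoiser`, and the point-based "3D object" are all hypothetical stand-ins (the real method uses a pretrained image diffusion model and a differentiable renderer such as one over NeRF or Gaussian splats).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins: the "3D object" is a set of surface points with
# per-point parameters, and the multi-view-consistent noise is a Gaussian
# sample fixed on the 3D surface, so every view of the same surface point
# sees the same noise value (the consistency property the paper requires).
n_points = 256
params = np.zeros(n_points)                  # 3D parameters being optimized
point_noise = rng.standard_normal(n_points)  # noise attached to the 3D object

def render(view_indices, values):
    """Toy 'renderer': a viewpoint just selects a subset of surface points."""
    return values[view_indices]

def toy_denoiser(x_t, t):
    """Stand-in for a pretrained diffusion model's noise prediction.
    An arbitrary smooth function, NOT a real model."""
    return x_t * (1.0 - t)

lr, t = 0.1, 0.5
for step in range(100):
    view = rng.choice(n_points, size=64, replace=False)  # random viewpoint
    img = render(view, params)
    noise = render(view, point_noise)  # same 3D noise -> consistent across views
    x_t = img + t * noise              # noisy rendering at diffusion time t
    eps_pred = toy_denoiser(x_t, t)
    grad = eps_pred - noise            # SDS-style residual used as flow gradient
    params[view] -= lr * grad          # update only the points seen in this view
```

The point of the sketch is the data flow, not the numbers: because `point_noise` lives on the object rather than being resampled per view, gradients computed from different viewpoints pull the shared 3D parameters in mutually consistent directions.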