🤖 AI Summary
This work addresses two challenges in text-to-3D generation: information loss and geometric distortion caused by discrete representations, and the misaligned objectives of conventional two-stage training pipelines. To overcome these limitations, the authors propose a view-aware 3D Vector Quantized Variational Autoencoder (VQ-VAE) that encodes 3D geometry into discrete tokens, combined with a rendering-supervised joint training strategy that simultaneously optimizes token prediction and multi-view image reconstruction, enabling high-fidelity 3D generation through an autoregressive Transformer. The method achieves superior semantic alignment between text and 3D content while preserving geometric consistency, outperforming existing approaches.
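The core tokenization step the summary describes — mapping continuous 3D latents to discrete codebook tokens — can be sketched as follows. This is an illustrative nearest-neighbor quantizer under assumed shapes, not the paper's implementation; the function name `quantize` and the toy dimensions are hypothetical.

```python
import numpy as np

def quantize(latents, codebook):
    """Snap each continuous latent vector to its nearest codebook entry,
    the core step of a VQ-VAE (simplified sketch, not the paper's code)."""
    # latents: (N, D) continuous encoder outputs; codebook: (K, D) learned entries
    d = ((latents[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)  # (N, K) squared distances
    tokens = d.argmin(axis=1)      # discrete token indices fed to the Transformer
    quantized = codebook[tokens]   # quantized latents used for reconstruction
    return tokens, quantized

# toy example: 3 latents lying near codebook entries 2, 0, and 3
rng = np.random.default_rng(0)
codebook = rng.normal(size=(4, 2))
latents = codebook[[2, 0, 3]] + 0.01
tokens, q = quantize(latents, codebook)
```

During training, the non-differentiable `argmin` is typically bypassed with a straight-through gradient estimator so the encoder still receives reconstruction gradients.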
📝 Abstract
Recent advances in auto-regressive transformers have achieved remarkable success in generative modeling. However, text-to-3D generation remains challenging, primarily due to bottlenecks in learning discrete 3D representations. Specifically, existing approaches often suffer from information loss during encoding, causing representational distortion before the quantization process. This effect is further amplified by vector quantization, ultimately degrading the geometric coherence of text-conditioned 3D shapes. Moreover, the conventional two-stage training paradigm induces an objective mismatch between reconstruction and text-conditioned auto-regressive generation. To address these issues, we propose View-aware Auto-Regressive 3D (VAR-3D), which integrates a view-aware 3D Vector Quantized-Variational AutoEncoder (VQ-VAE) to convert the complex geometric structure of 3D models into discrete tokens. Additionally, we introduce a rendering-supervised training strategy that couples discrete token prediction with visual reconstruction, encouraging the generative process to better preserve visual fidelity and structural consistency with respect to the input text. Experiments demonstrate that VAR-3D significantly outperforms existing methods in both generation quality and text-3D alignment.
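The rendering-supervised strategy couples two objectives: cross-entropy on the predicted discrete tokens and a multi-view reconstruction term on rendered images. A minimal sketch of such a joint loss is below; the weighting `lam` and the plain MSE reconstruction term are assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np

def joint_loss(token_logits, target_tokens, rendered_views, gt_views, lam=0.5):
    """Illustrative rendering-supervised joint objective:
    token cross-entropy + lam * multi-view reconstruction error.
    `lam` is an assumed hyperparameter, not from the paper."""
    # numerically stable log-softmax over the codebook dimension
    z = token_logits - token_logits.max(axis=1, keepdims=True)
    log_probs = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    # cross-entropy on the target discrete tokens
    ce = -log_probs[np.arange(len(target_tokens)), target_tokens].mean()
    # multi-view reconstruction: MSE between rendered and ground-truth views
    recon = ((rendered_views - gt_views) ** 2).mean()
    return ce + lam * recon
```

Optimizing both terms jointly, rather than in two separate stages, is what aligns the token-prediction objective with the final visual quality of the generated shape.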