VAR-3D: View-aware Auto-Regressive Model for Text-to-3D Generation via a 3D Tokenizer

📅 2026-02-14
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the challenges in text-to-3D generation arising from information loss and geometric distortion due to discrete representations, as well as misaligned objectives in conventional two-stage training pipelines. To overcome these limitations, the authors propose a view-aware 3D Vector Quantized Variational Autoencoder (VQ-VAE) that encodes 3D geometry into discrete tokens and integrates a rendering-supervised joint training strategy. This approach simultaneously optimizes token prediction and multi-view image reconstruction, enabling high-fidelity 3D generation through an autoregressive Transformer. The method achieves superior semantic alignment between text and 3D content while preserving geometric consistency, outperforming existing approaches in overall performance.

📝 Abstract
Recent advances in auto-regressive transformers have achieved remarkable success in generative modeling. However, text-to-3D generation remains challenging, primarily due to bottlenecks in learning discrete 3D representations. Specifically, existing approaches often suffer from information loss during encoding, causing representational distortion before the quantization process. This effect is further amplified by vector quantization, ultimately degrading the geometric coherence of text-conditioned 3D shapes. Moreover, the conventional two-stage training paradigm induces an objective mismatch between reconstruction and text-conditioned auto-regressive generation. To address these issues, we propose View-aware Auto-Regressive 3D (VAR-3D), which integrates a view-aware 3D Vector Quantized Variational AutoEncoder (VQ-VAE) to convert the complex geometric structure of 3D models into discrete tokens. Additionally, we introduce a rendering-supervised training strategy that couples discrete token prediction with visual reconstruction, encouraging the generative process to better preserve visual fidelity and structural consistency relative to the input text. Experiments demonstrate that VAR-3D significantly outperforms existing methods in both generation quality and text-3D alignment.
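The two ingredients the abstract describes, a VQ-VAE that maps continuous latents to discrete codebook tokens, and a joint objective that couples token prediction with rendering reconstruction, can be sketched as follows. This is a minimal NumPy illustration of the general technique, not the paper's implementation; the function names, the L2 rendering loss, and the weighting `lam` are assumptions for the sake of the example.

```python
import numpy as np

def vector_quantize(latents, codebook):
    """Nearest-neighbor quantization as in a standard VQ-VAE.

    latents:  (N, D) continuous encoder outputs
    codebook: (K, D) learned code vectors
    Returns the discrete token index and quantized vector per latent.
    """
    # Squared Euclidean distance from every latent to every code: (N, K)
    d = ((latents[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    idx = d.argmin(axis=1)            # discrete 3D tokens
    return idx, codebook[idx]

def joint_loss(token_logits, target_idx, rendered, target_views, lam=0.5):
    """Joint objective sketch: token cross-entropy + rendering supervision.

    Couples discrete token prediction with multi-view reconstruction,
    in the spirit of the paper's rendering-supervised training strategy
    (lam is a hypothetical balancing weight).
    """
    # Log-softmax over the codebook dimension, then cross-entropy.
    logp = token_logits - np.log(np.exp(token_logits).sum(-1, keepdims=True))
    ce = -logp[np.arange(len(target_idx)), target_idx].mean()
    # Simple L2 between rendered and ground-truth views (stand-in for
    # whatever rendering loss the paper actually uses).
    render = ((rendered - target_views) ** 2).mean()
    return ce + lam * render
```

The key point of the joint objective is that gradients from the rendering term reach the same parameters trained for token prediction, so the two stages of the conventional pipeline no longer optimize mismatched targets.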
Problem

Research questions and friction points this paper is trying to address.

text-to-3D generation
discrete 3D representation
information loss
objective mismatch
geometric coherence
Innovation

Methods, ideas, or system contributions that make the work stand out.

view-aware
3D tokenization
auto-regressive generation
rendering-supervised training
VQ-VAE
Zongcheng Han — School of Computer Science and Technology, Soochow University, Suzhou, China
Dongyan Cao — School of Computer Science and Technology, Harbin Institute of Technology, Harbin, China
Haoran Sun — University of Electronic Science and Technology of China
Yu Hong — Colorado School of Mines; University of Florida