SeMv-3D: Towards Concurrency of Semantic and Multi-view Consistency in General Text-to-3D Generation

📅 2024-10-10
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses General Text-to-3D (GT23D) generation, tackling two core challenges simultaneously: semantic consistency (text-3D alignment) and multi-view consistency (cross-view geometric coherence). The authors propose a joint framework with dual objectives, introducing Triplane Prior Learning (TPL), which learns geometry-aware triplane priors, and Prior-based Semantic Aligning in Triplanes (SAT), enabling synergistic enhancement of both semantic and geometric fidelity. To further improve cross-view coherence and text alignment, the framework employs Orthogonal Attention, attention-driven cross-modal feature alignment, and diffusion-based arbitrary-view synthesis. The approach achieves new state-of-the-art performance in multi-view consistency while maintaining top-tier semantic consistency. Experiments demonstrate significant improvements in 3D structural plausibility and text fidelity, establishing a robust foundation for high-fidelity, controllable GT23D generation.

📝 Abstract
General Text-to-3D (GT23D) generation is crucial for creating diverse 3D content across objects and scenes, yet it faces two key challenges: 1) ensuring semantic consistency between input text and generated 3D models, and 2) maintaining multi-view consistency across different perspectives within 3D. Existing approaches typically address only one of these challenges, often leading to suboptimal results in semantic fidelity and structural coherence. To overcome these limitations, we propose SeMv-3D, a novel framework that jointly enhances semantic alignment and multi-view consistency in GT23D generation. At its core, we introduce Triplane Prior Learning (TPL), which effectively learns triplane priors by capturing spatial correspondences across three orthogonal planes using a dedicated Orthogonal Attention mechanism, thereby ensuring geometric consistency across viewpoints. Additionally, we present Prior-based Semantic Aligning in Triplanes (SAT), which enables consistent any-view synthesis by leveraging attention-based feature alignment to reinforce the correspondence between textual semantics and triplane representations. Extensive experiments demonstrate that our method sets a new state-of-the-art in multi-view consistency, while maintaining competitive performance in semantic consistency compared to methods focused solely on semantic alignment. These results emphasize the remarkable ability of our approach to effectively balance and excel in both dimensions, establishing a new benchmark in the field.
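For context, the triplane representation underlying TPL factorizes a 3D volume into three orthogonal 2D feature planes (XY, XZ, YZ); a 3D point is queried by projecting it onto each plane, sampling bilinearly, and aggregating the results. The sketch below shows this standard mechanic in NumPy; it is not the paper's code, and all function names are illustrative.

```python
import numpy as np

def bilinear_sample(plane, u, v):
    """Bilinearly sample a (H, W, C) feature plane at continuous coords u, v in [0, 1]."""
    H, W, _ = plane.shape
    x, y = u * (W - 1), v * (H - 1)
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    x1, y1 = min(x0 + 1, W - 1), min(y0 + 1, H - 1)
    wx, wy = x - x0, y - y0
    return ((1 - wx) * (1 - wy) * plane[y0, x0] + wx * (1 - wy) * plane[y0, x1]
            + (1 - wx) * wy * plane[y1, x0] + wx * wy * plane[y1, x1])

def query_triplane(planes, p):
    """Project 3D point p = (x, y, z) in [0, 1]^3 onto the XY, XZ, YZ planes and sum features."""
    xy, xz, yz = planes
    return (bilinear_sample(xy, p[0], p[1])
            + bilinear_sample(xz, p[0], p[2])
            + bilinear_sample(yz, p[1], p[2]))

rng = np.random.default_rng(1)
planes = [rng.standard_normal((32, 32, 8)) for _ in range(3)]  # three 32x32 planes, 8 channels
feat = query_triplane(planes, (0.3, 0.7, 0.5))
print(feat.shape)  # (8,)
```

Because every 3D query reduces to three 2D lookups, the triplane is a compact prior over which cross-plane consistency (as TPL enforces) directly translates into cross-view geometric coherence.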
Problem

Research questions and friction points this paper is trying to address.

Ensuring semantic consistency between text and 3D models
Maintaining multi-view consistency across 3D perspectives
Balancing semantic and geometric coherence in text-to-3D generation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Triplane Prior Learning for geometric consistency
Orthogonal Attention for spatial correspondences
Prior-based Semantic Aligning for text-triplane correspondence
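The paper describes Orthogonal Attention only at a high level; as a minimal sketch of one plausible reading, each triplane feature map attends to the two orthogonal planes so that spatially related features are exchanged across planes. All names here (`cross_plane_attention`, the residual aggregation) are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_plane_attention(q_plane, kv_plane):
    """Scaled dot-product attention from every feature of q_plane to all features of kv_plane.
    Planes are (H, W, C) feature maps, flattened into token sequences."""
    C = q_plane.shape[-1]
    q = q_plane.reshape(-1, C)               # (Hq*Wq, C)
    kv = kv_plane.reshape(-1, C)             # (Hk*Wk, C)
    attn = softmax(q @ kv.T / np.sqrt(C))    # (Nq, Nk) attention weights
    return (attn @ kv).reshape(q_plane.shape)

rng = np.random.default_rng(0)
R, C = 8, 16
xy, xz, yz = (rng.standard_normal((R, R, C)) for _ in range(3))

# Each plane aggregates information from the two orthogonal planes (residual form assumed)
xy_out = xy + cross_plane_attention(xy, xz) + cross_plane_attention(xy, yz)
print(xy_out.shape)  # (8, 8, 16)
```

The design intuition matches the abstract's claim: by letting the XY plane read features from XZ and YZ (and symmetrically for the other planes), geometric information is shared across all three orthogonal views before rendering, which is what enforces cross-view coherence.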
Xiao Cai
Digital Geography Lab, University of Helsinki
urban micro-mobility, travel behavior, accessibility, social equity, artificial intelligence
Pengpeng Zeng
Tongji University
computer vision
Lianli Gao
UESTC
Vision and Language
Junchen Zhu
University of Electronic Science and Technology of China
AIGC, Multimodal Large Model
Jiaxin Zhang
University of Electronic Science and Technology of China
Sitong Su
University of Electronic Science and Technology of China
Hengtao Shen
University of Electronic Science and Technology of China
Jingkuan Song
University of Electronic Science and Technology of China