UniGS: Modeling Unitary 3D Gaussians for Novel View Synthesis from Sparse-view Images

📅 2024-10-17
🤖 AI Summary
To address novel view synthesis from sparse input views, this paper proposes an end-to-end differentiable 3D Gaussian representation learning framework. The method jointly optimizes a single, globally consistent set of 3D Gaussians in a unified world coordinate space, eliminating per-view regression and the stitching artifacts it causes. It introduces multi-view cross-attention (MVDFA), in which the 3D Gaussians serve as queries and image features as keys and values, so that any number of input views can be handled without retraining. Integrated with a DETR-inspired architecture and adaptive densification, the framework achieves state-of-the-art performance on the GSO benchmark, improving PSNR by 4.2 dB over prior work. It also markedly suppresses ghosting artifacts while keeping memory overhead constant, regardless of the number of input views.

📝 Abstract
In this work, we introduce UniGS, a novel 3D Gaussian reconstruction and novel view synthesis model that predicts a high-fidelity representation of 3D Gaussians from an arbitrary number of posed sparse-view images. Previous methods often regress 3D Gaussians locally on a per-pixel basis for each view, then transfer them to world space and merge them through point concatenation. In contrast, our approach models unitary 3D Gaussians in world space and updates them layer by layer. To leverage information from multi-view inputs when updating the unitary 3D Gaussians, we develop a DETR (DEtection TRansformer)-like framework that treats 3D Gaussians as queries and updates their parameters by performing multi-view cross-attention (MVDFA) over the input images, which are treated as keys and values. This approach effectively avoids the 'ghosting' issue and allocates more 3D Gaussians to complex regions. Moreover, since the number of 3D Gaussians used as decoder queries is independent of the number of input views, our method accepts an arbitrary number of multi-view images as input without causing memory explosion or requiring retraining. Extensive experiments validate the advantages of our approach, showing superior performance over existing methods both quantitatively (improving PSNR by 4.2 dB when trained on Objaverse and tested on the GSO benchmark) and qualitatively. The code will be released at https://github.com/jwubz123/UNIG.
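The core mechanism described above — world-space Gaussian queries attending to image features from all views — can be illustrated with a minimal NumPy sketch. This is an assumption-laden simplification, not the paper's MVDFA: it uses a single fixed-weight attention layer (the real model stacks learned decoder layers with dynamic attention and adaptive densification), and all names here (`multiview_cross_attention`, the weight matrices) are illustrative. It does show the key property from the abstract: the output size depends only on the number of Gaussian queries, not on the number of input views, since view tokens only enlarge the key/value set.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax over the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def multiview_cross_attention(gaussian_queries, view_features, Wq, Wk, Wv):
    """One cross-attention update of unitary 3D Gaussian queries.

    gaussian_queries: (N, d)    one feature vector per world-space Gaussian
    view_features:    (V, M, d) M image-feature tokens from each of V views
    """
    V, M, d = view_features.shape
    kv = view_features.reshape(V * M, d)   # concatenate tokens from all views
    Q = gaussian_queries @ Wq              # Gaussians act as queries
    K = kv @ Wk                            # image features act as keys...
    Vals = kv @ Wv                         # ...and values
    attn = softmax(Q @ K.T / np.sqrt(d))   # (N, V*M) attention weights
    return gaussian_queries + attn @ Vals  # residual update of the queries

rng = np.random.default_rng(0)
d, N = 32, 128
Wq, Wk, Wv = (0.05 * rng.standard_normal((d, d)) for _ in range(3))
queries = rng.standard_normal((N, d))
for num_views in (2, 4, 8):
    feats = rng.standard_normal((num_views, 64, d))
    out = multiview_cross_attention(queries, feats, Wq, Wk, Wv)
    # The updated Gaussian set has shape (N, d) for any number of views.
    assert out.shape == (N, d)
```

Because the queries live in one shared world space, adding views only widens the attention's key/value dimension; there is no per-view Gaussian set to regress and merge, which is how the method sidesteps point-concatenation stitching.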
Problem

Research questions and friction points this paper is trying to address.

Modeling unitary 3D Gaussians for sparse-view novel view synthesis
Avoiding ghosting and optimizing 3D Gaussian allocation
Enabling arbitrary multi-view input without memory or retraining constraints
Innovation

Methods, ideas, or system contributions that make the work stand out.

Modeling unitary 3D Gaussians in world space
Using DETR-like framework for multi-view updates
Query count independent of the number of views, avoiding memory explosion
Jiamin Wu
Hong Kong University of Science and Technology; International Digital Economy Academy (IDEA)

Kenkun Liu
The Chinese University of Hong Kong (Shenzhen)

Yukai Shi
International Digital Economy Academy (IDEA); Tsinghua University

Xiaoke Jiang
Research@IDEA

Yuan Yao
Hong Kong University of Science and Technology

Lei Zhang
International Digital Economy Academy (IDEA)