Class-Partitioned VQ-VAE and Latent Flow Matching for Point Cloud Scene Generation

📅 2026-01-18
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing 3D scene generation methods struggle to accurately decode category-consistent multi-object point clouds from latent features, particularly in complex scenes. This work proposes an end-to-end framework that integrates a Class-Partitioned Vector Quantized Variational Autoencoder (CPVQ-VAE) with a Latent Flow Matching Model (LFMM) specifically designed for scene generation, enabling the synthesis of semantically coherent point cloud scenes without relying on external object databases. The approach introduces a class-partitioned codebook and a class-aware running average update mechanism, effectively mitigating codebook collapse and ensuring category-aligned feature decoding. Evaluated on complex indoor living room scenes, the method reduces Chamfer distance and Point2Mesh error by up to 70.4% and 72.3%, respectively, substantially improving both geometric fidelity and semantic consistency.

📝 Abstract
Most 3D scene generation methods are limited to generating only object bounding box parameters, while newer diffusion methods also generate class labels and latent features. Using object sizes or latent features, these methods then retrieve objects from a predefined database. For complex scenes of varied, multi-categorical objects, diffusion-based latents cannot be effectively decoded by current autoencoders into correct point cloud objects that agree with the target classes. We introduce a Class-Partitioned Vector Quantized Variational Autoencoder (CPVQ-VAE) that is trained to effectively decode object latent features by employing a pioneering $\textit{class-partitioned codebook}$ in which codevectors are labeled by class. To address the problem of $\textit{codebook collapse}$, we propose a $\textit{class-aware}$ running average update that reinitializes dead codevectors within each partition. During inference, object features and class labels, both generated by a Latent-space Flow Matching Model (LFMM) designed specifically for scene generation, are consumed by the CPVQ-VAE. The CPVQ-VAE's class-aware inverse look-up then maps generated latents to codebook entries that are decoded into class-specific point cloud shapes. We thereby achieve pure point cloud generation without relying on an external object database for retrieval. Extensive experiments show that our method reliably recovers plausible point cloud scenes, with up to 70.4% and 72.3% reductions in Chamfer and Point2Mesh errors on complex living room scenes.
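The class-aware inverse look-up described above can be sketched as a nearest-neighbor search restricted to one class's codebook partition. The array layout, sizes, and function name below are illustrative assumptions for exposition, not the paper's actual implementation:

```python
import numpy as np

# Minimal sketch of a class-partitioned codebook lookup (assumed layout:
# one partition of codevectors per object class).
rng = np.random.default_rng(0)

NUM_CLASSES, CODES_PER_CLASS, DIM = 4, 8, 16
codebook = rng.normal(size=(NUM_CLASSES, CODES_PER_CLASS, DIM))

def quantize(latent, class_id, codebook):
    """Snap `latent` to its nearest codevector within its class partition.

    Restricting the nearest-neighbor search to the partition of `class_id`
    guarantees the selected codevector (and hence the decoded shape)
    belongs to the requested class.
    """
    partition = codebook[class_id]                      # (CODES_PER_CLASS, DIM)
    dists = np.linalg.norm(partition - latent, axis=1)  # L2 to each codevector
    idx = int(np.argmin(dists))
    return partition[idx], idx

# A generated latent plus its generated class label select a codebook entry.
latent = rng.normal(size=DIM)
code, idx = quantize(latent, class_id=2, codebook=codebook)
```

At inference, the selected entry would then be passed to the decoder to produce a class-specific point cloud shape.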
Problem

Research questions and friction points this paper is trying to address.

point cloud generation
class-conditional decoding
codebook collapse
3D scene synthesis
latent representation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Class-Partitioned VQ-VAE
Latent Flow Matching
Class-Aware Codebook
Point Cloud Generation
Codebook Collapse Mitigation
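The codebook-collapse mitigation listed above, a class-aware running average that revives dead codevectors within each partition, can be sketched roughly as follows. The decay value, dead-code threshold, and function signature are assumptions for illustration, not the paper's exact algorithm:

```python
import numpy as np

def ema_update(partition, ema_counts, ema_sums, assignments, latents,
               decay=0.99, dead_threshold=1e-3):
    """Class-aware EMA update of one codebook partition (illustrative sketch).

    `assignments[i]` is the index of the codevector in this partition that
    latent `latents[i]` (all of the same class) was quantized to. Dead
    codevectors are reinitialized from latents of the same class, keeping
    every partition populated with class-consistent entries.
    """
    codes_per_class, _ = partition.shape
    counts = np.bincount(assignments, minlength=codes_per_class).astype(float)
    sums = np.zeros_like(partition)
    np.add.at(sums, assignments, latents)  # scatter-add latents per codevector

    # Running averages of usage counts and assigned-latent sums.
    ema_counts[:] = decay * ema_counts + (1 - decay) * counts
    ema_sums[:] = decay * ema_sums + (1 - decay) * sums

    # Move frequently used codevectors toward the mean of their latents.
    live = ema_counts > dead_threshold
    partition[live] = ema_sums[live] / ema_counts[live][:, None]

    # Reinitialize dead codevectors from random latents of the same class.
    dead = ~live
    if dead.any() and len(latents) > 0:
        picks = np.random.default_rng(0).integers(0, len(latents), dead.sum())
        partition[dead] = latents[picks]
        ema_counts[dead] = 1.0
        ema_sums[dead] = latents[picks]
    return partition
```

Because the update runs per partition, a codevector can only be revived with latents of its own class, which is what keeps the codebook's class labels meaningful throughout training.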
Dasith de Silva Edirimuni
Research Fellow, The University of Western Australia
Machine Learning · Artificial Intelligence · Computer Vision
A. Mian
The University of Western Australia, 35 Stirling Highway, Perth, WA 6009 Australia