A Continuous-Time Consistency Model for 3D Point Cloud Generation

📅 2025-09-01
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses key limitations in point-cloud-based 3D shape generation, namely reliance on discrete diffusion steps, pre-trained teacher models, or latent-space encodings. We propose ConTiCoM-3D, the first continuous-time consistency model operating directly in raw point-cloud space. Our method eliminates discretization and external supervision by combining a time-conditioned neural network, a TrigFlow-inspired continuous noise schedule, and a Chamfer-distance-driven geometric loss, enabling stable training and end-to-end optimization over high-dimensional point sets. Crucially, it avoids Jacobian-vector product computations entirely, supporting efficient one- or two-step inference. Evaluated on ShapeNet, ConTiCoM-3D matches or surpasses state-of-the-art diffusion and latent-space consistency models in both generation quality and inference speed, demonstrating the effectiveness and practicality of continuous-time modeling for scalable 3D generative learning.
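The Chamfer-distance geometric loss mentioned above can be sketched as follows. This is a minimal NumPy illustration of the standard symmetric Chamfer distance between two point sets; the paper's exact weighting and reduction may differ.

```python
import numpy as np

def chamfer_distance(p, q):
    """Symmetric Chamfer distance between point sets p (N, 3) and q (M, 3).

    Minimal sketch of the geometric loss described in the summary; the
    paper's exact formulation (weighting, reduction) may differ.
    """
    # Pairwise squared Euclidean distances, shape (N, M).
    d2 = np.sum((p[:, None, :] - q[None, :, :]) ** 2, axis=-1)
    # Average nearest-neighbour distance in both directions.
    return d2.min(axis=1).mean() + d2.min(axis=0).mean()
```

Because the loss is computed directly on point coordinates, it needs no latent decoder, which is what lets training stay in the native point space.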

📝 Abstract
Fast and accurate 3D shape generation from point clouds is essential for applications in robotics, AR/VR, and digital content creation. We introduce ConTiCoM-3D, a continuous-time consistency model that synthesizes 3D shapes directly in point space, without discretized diffusion steps, pre-trained teacher models, or latent-space encodings. The method integrates a TrigFlow-inspired continuous noise schedule with a Chamfer Distance-based geometric loss, enabling stable training on high-dimensional point sets while avoiding expensive Jacobian-vector products. This design supports efficient one- to two-step inference with high geometric fidelity. In contrast to previous approaches that rely on iterative denoising or latent decoders, ConTiCoM-3D employs a time-conditioned neural network operating entirely in continuous time, thereby achieving fast generation. Experiments on the ShapeNet benchmark show that ConTiCoM-3D matches or outperforms state-of-the-art diffusion and latent consistency models in both quality and efficiency, establishing it as a practical framework for scalable 3D shape generation.
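The TrigFlow-inspired schedule interpolates data and noise trigonometrically over continuous time. The sketch below assumes the common parameterization x_t = cos(t)·x0 + sin(t)·σ_d·z with t in [0, π/2]; the paper's exact schedule may differ in detail.

```python
import numpy as np

def trigflow_noise(x0, t, rng, sigma_d=1.0):
    """Trigonometric interpolation between a clean point cloud and noise.

    Sketch of a TrigFlow-style continuous schedule, assumed here as
    x_t = cos(t) * x0 + sin(t) * sigma_d * z with t in [0, pi/2];
    t = 0 gives the data, t = pi/2 gives (scaled) Gaussian noise.
    """
    z = rng.standard_normal(x0.shape)
    return np.cos(t) * x0 + np.sin(t) * sigma_d * z
```

Since t is a continuous scalar rather than a discrete step index, the time-conditioned network can be trained and queried at any noise level without a fixed discretization grid.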
Problem

Research questions and friction points this paper is trying to address.

Generating 3D shapes from point clouds efficiently
Avoiding discretized diffusion steps and latent encodings
Achieving high geometric fidelity with fast inference
Innovation

Methods, ideas, or system contributions that make the work stand out.

Continuous-time consistency model without diffusion steps
Integrates continuous noise schedule with geometric loss
Employs time-conditioned neural network in continuous time
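The one- or two-step inference described above can be sketched as follows, with `f` standing in for the paper's time-conditioned consistency network; all names and the re-noising formula here are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def sample(f, shape, rng, t_max=np.pi / 2, t_mid=0.5, sigma_d=1.0):
    """One- or two-step consistency sampling of a point cloud.

    `f(x, t)` is assumed to be a time-conditioned consistency network
    mapping a noisy point cloud at time t to a clean estimate.
    """
    # One-step: map pure noise at the terminal time to a clean sample.
    x = f(sigma_d * rng.standard_normal(shape), t_max)
    # Optional second step: re-noise to an intermediate time and refine.
    z = rng.standard_normal(shape)
    x = f(np.cos(t_mid) * x + np.sin(t_mid) * sigma_d * z, t_mid)
    return x
```

Both steps are single network evaluations, which is why inference cost stays constant instead of scaling with a denoising-step count.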