Generative Human Geometry Distribution

📅 2025-03-03
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses high-fidelity 3D human geometry generation, aiming to synthesize personalized human avatars with fine-grained clothing details and physically plausible clothing-pose interactions. To overcome existing methods' limited ability to capture population-level geometric variability, the authors propose, for the first time, a "distribution over geometry distributions" modeling paradigm. Their two-stage generative framework first learns a dataset-level distribution over individual geometry distributions, then samples from it conditionally to produce pose-coherent, detail-rich individual models. The method combines implicit geometric parameterization, differentiable 3D rendering, and geometric regularization to jointly model clothing structure and pose-dependent deformations. Extensive experiments show state-of-the-art performance on both pose-conditioned generation and single-view novel-pose reconstruction, with consistent gains in geometric fidelity and visual realism across quantitative metrics including Chamfer distance, normal consistency, and perceptual quality scores.

📝 Abstract
Realistic human geometry generation is an important yet challenging task, requiring both the preservation of fine clothing details and the accurate modeling of clothing-pose interactions. Geometry distributions, which can model the geometry of a single human as a distribution, provide a promising representation for high-fidelity synthesis. However, applying geometry distributions for human generation requires learning a dataset-level distribution over numerous individual geometry distributions. To address the resulting challenges, we propose a novel 3D human generative framework that, for the first time, models the distribution of human geometry distributions. Our framework operates in two stages: first, generating the human geometry distribution, and second, synthesizing high-fidelity humans by sampling from this distribution. We validate our method on two tasks: pose-conditioned 3D human generation and single-view-based novel pose generation. Experimental results demonstrate that our approach achieves the best quantitative results in terms of realism and geometric fidelity, outperforming state-of-the-art generative methods.
Problem

Research questions and friction points this paper is trying to address.

Generating realistic human geometry with fine clothing details.
Modeling clothing-pose interactions accurately in 3D human generation.
Learning a dataset-level distribution over many individual human geometry distributions.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Novel 3D human generative framework.
Models the distribution of human geometry distributions.
Two-stage synthesis pipeline for high-fidelity humans.
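The two-stage idea (first generate a per-individual geometry distribution, then sample surface points from it) can be caricatured in a few lines of NumPy. This is only an illustrative sketch, not the paper's method: the learned generator and decoder are replaced by fixed random linear maps, and all function and variable names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def generate_geometry_distribution(pose, latent_dim=8):
    # Stage 1 (stand-in): map a pose code to the parameters of a
    # per-individual geometry distribution. A fixed random linear
    # map replaces the learned dataset-level generator.
    W_mu = rng.standard_normal((latent_dim, pose.size))
    W_sigma = rng.standard_normal((latent_dim, pose.size))
    mu = W_mu @ pose
    sigma = np.exp(0.1 * (W_sigma @ pose))  # strictly positive scales
    return mu, sigma

def sample_surface_points(mu, sigma, n_points=1024):
    # Stage 2 (stand-in): draw surface samples by pushing Gaussian
    # noise through the distribution parameters, then "decode" by
    # taking the first three latent dimensions as xyz coordinates.
    z = rng.standard_normal((n_points, mu.size))
    feats = mu + sigma * z
    return feats[:, :3]

pose = rng.standard_normal(4)  # hypothetical pose code
mu, sigma = generate_geometry_distribution(pose)
points = sample_surface_points(mu, sigma)
print(points.shape)  # (1024, 3)
```

The separation mirrors the framework's structure: stage 1 outputs a distribution (here just `mu`, `sigma`), so stage 2 can be re-sampled arbitrarily many times to densify the geometry without re-running the generator.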