SMPL-GPTexture: Dual-View 3D Human Texture Estimation using Text-to-Image Generation Models

📅 2025-04-17
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
To address artifacts, structural distortions, and detail loss in 3D human texture generation under unpaired front/back view conditions, this paper proposes a text-driven dual-view texture generation framework. Methodologically, it first leverages text prompts to guide Stable Diffusion in generating semantically consistent front and back views; it then performs geometry-aligned UV mapping via SMPL mesh reconstruction and differentiable inverse rasterization; finally, it applies a diffusion model in UV space for cross-view inpainting and fusion. Key contributions include: (1) the first end-to-end paradigm for text-to-dual-view texture generation; (2) a novel inverse-rasterization-based UV projection mechanism that avoids the topological inconsistencies inherent in implicit modeling; and (3) high-fidelity reconstruction of full 1024×1024 texture maps, significantly improving detail realism and geometric consistency, especially in occluded and dorsal regions.
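The inverse-rasterization UV projection step in the summary can be sketched in NumPy. This is a minimal illustration, assuming per-pixel UV coordinates and a visibility mask have already been obtained by rasterizing the recovered SMPL mesh; the function name and array layout are illustrative, not the paper's code:

```python
import numpy as np

def scatter_to_uv(image, uv_coords, visible_mask, tex_size=1024):
    """Scatter observed pixel colours from an input view into UV texture space.

    image:        (H, W, 3) float array of observed colours
    uv_coords:    (H, W, 2) per-pixel UV coordinates in [0, 1], assumed to come
                  from rasterizing the SMPL mesh with its UV attributes
    visible_mask: (H, W) bool array, True where the body surface is visible
    """
    texture = np.zeros((tex_size, tex_size, 3), dtype=np.float32)
    filled = np.zeros((tex_size, tex_size), dtype=bool)

    # Gather visible pixels and convert their UVs to integer texel indices.
    ys, xs = np.nonzero(visible_mask)
    u = np.clip((uv_coords[ys, xs, 0] * (tex_size - 1)).astype(int), 0, tex_size - 1)
    v = np.clip((uv_coords[ys, xs, 1] * (tex_size - 1)).astype(int), 0, tex_size - 1)

    # Explicitly project observed colours into the UV map.
    texture[v, u] = image[ys, xs]
    filled[v, u] = True
    return texture, filled
```

The `filled` mask marks which texels were observed from this view; its complement is what a later inpainting stage would need to synthesize.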

📝 Abstract
Generating high-quality, photorealistic textures for 3D human avatars remains a fundamental yet challenging task in computer vision and multimedia. Real paired front and back images of human subjects are rarely available, owing to privacy and ethical concerns as well as acquisition cost, which limits the scalability of data collection. Additionally, learning priors from image inputs with deep generative models, such as GANs or diffusion models, to infer unseen regions such as the human back often leads to artifacts, structural inconsistencies, or loss of fine-grained detail. To address these issues, we present SMPL-GPTexture (skinned multi-person linear model - general purpose Texture), a novel pipeline that takes natural language prompts as input and leverages a state-of-the-art text-to-image generation model to produce paired high-resolution front and back images of a human subject as the starting point for texture estimation. Using the generated dual-view image pair, we first employ a human mesh recovery model to obtain a robust 2D-to-3D SMPL alignment between image pixels and the 3D model's UV coordinates for each view. Second, we use an inverse rasterization technique that explicitly projects the observed colours from the input images into UV space, producing accurate, complete texture maps. Finally, we apply a diffusion-based inpainting module to fill in the missing regions, and a fusion mechanism combines these results into a unified full texture map. Extensive experiments show that SMPL-GPTexture can generate high-resolution textures aligned with users' prompts.
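The fusion step in the abstract, which merges the front and back partial UV maps and leaves the remaining holes for the diffusion inpainting module, can be sketched as below. The averaging rule and hole-mask convention here are assumptions for illustration, not the paper's exact fusion mechanism:

```python
import numpy as np

def fuse_uv_maps(front_tex, front_mask, back_tex, back_mask):
    """Fuse front and back partial UV textures into one map.

    Where both views observed a texel, average them; where only one view did,
    take that view's colour. Texels covered by neither mask form the hole mask
    handed to the inpainting module.
    """
    w_front = front_mask.astype(np.float32)[..., None]
    w_back = back_mask.astype(np.float32)[..., None]
    denom = np.clip(w_front + w_back, 1e-6, None)  # avoid divide-by-zero
    fused = (front_tex * w_front + back_tex * w_back) / denom

    hole_mask = ~(front_mask | back_mask)  # regions left for inpainting
    return fused, hole_mask
```

A per-texel confidence weight (e.g. based on viewing angle) could replace the binary masks without changing the structure of this step.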
Problem

Research questions and friction points this paper is trying to address.

Generating realistic 3D human textures from limited dual-view images
Reducing artifacts in inferred unseen regions like human back
Aligning high-resolution textures with user-provided text prompts
Innovation

Methods, ideas, or system contributions that make the work stand out.

Leverages text-to-image model for dual-view generation
Uses human mesh recovery for 2D-to-3D alignment
Applies diffusion inpainting for complete texture fusion