Single View Garment Reconstruction Using Diffusion Mapping Via Pattern Coordinates

📅 2025-04-11
📈 Citations: 0
Influential: 0
🤖 AI Summary
Single-image 3D reconstruction of loose clothing remains highly challenging. This paper introduces the first diffusion-based mapping framework grounded in pattern coordinates, jointly modeling image pixels, UV coordinates, and 3D mesh geometry. We adopt an Implicit Sewing Patterns (ISP) representation in UV space to encode garment structure and integrate it with a generative diffusion model to learn geometric priors. A cross-domain differentiable mapping network is designed to enable end-to-end optimization from image → UV pattern → 3D mesh. Crucially, our method is trained exclusively on synthetic cloth data yet generalizes robustly to real-world images. Quantitative and qualitative evaluations demonstrate significant improvements over state-of-the-art methods on both tight- and loose-fitting garments. Reconstructed geometries are physically plausible, rich in fine-scale detail, and support downstream applications including pose retargeting and texture editing.

📝 Abstract
Reconstructing 3D clothed humans from images is fundamental to applications like virtual try-on, avatar creation, and mixed reality. While recent advances have enhanced human body recovery, accurate reconstruction of garment geometry -- especially for loose-fitting clothing -- remains an open challenge. We present a novel method for high-fidelity 3D garment reconstruction from single images that bridges 2D and 3D representations. Our approach combines Implicit Sewing Patterns (ISP) with a generative diffusion model to learn rich garment shape priors in a 2D UV space. A key innovation is our mapping model that establishes correspondences between 2D image pixels, UV pattern coordinates, and 3D geometry, enabling joint optimization of both 3D garment meshes and the corresponding 2D patterns by aligning learned priors with image observations. Despite training exclusively on synthetically simulated cloth data, our method generalizes effectively to real-world images, outperforming existing approaches on both tight- and loose-fitting garments. The reconstructed garments maintain physical plausibility while capturing fine geometric details, enabling downstream applications including garment retargeting and texture manipulation.
Problem

Research questions and friction points this paper is trying to address.

Reconstructing 3D garments from single images
Handling loose-fitting clothing accurately
Bridging 2D and 3D garment representations
Innovation

Methods, ideas, or system contributions that make the work stand out.

Combines ISP with diffusion model
Maps 2D pixels to UV coordinates
Optimizes 3D meshes via 2D alignment
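The innovation bullets above describe a three-domain correspondence: image pixels are mapped to UV pattern coordinates, which are then lifted to 3D geometry, so that the 3D mesh can be optimized by aligning it with 2D image observations. The toy sketch below illustrates only the shape of that pipeline; `pixel_to_uv` and `uv_to_3d` are hypothetical stand-ins (simple closed-form functions), not the paper's learned mapping network or ISP decoder.

```python
import numpy as np

def pixel_to_uv(pixels):
    """Toy stand-in for the learned image-to-UV mapping:
    maps normalized garment pixel coordinates (N, 2) in [0, 1]
    to UV pattern coordinates (N, 2) in [0, 1]."""
    return np.clip(pixels * 0.9 + 0.05, 0.0, 1.0)

def uv_to_3d(uv):
    """Toy stand-in for an ISP-style UV-to-3D decoder:
    embeds the flat 2D pattern into 3D with a fake drape
    height field z = 0.1 * sin(pi*u) * sin(pi*v)."""
    u, v = uv[:, 0], uv[:, 1]
    z = 0.1 * np.sin(np.pi * u) * np.sin(np.pi * v)
    return np.stack([u, v, z], axis=1)  # (N, 3) surface points

# Chaining the two maps gives per-pixel 3D correspondences,
# the quantity the paper's joint optimization aligns with the image.
pixels = np.array([[0.2, 0.3], [0.5, 0.5], [0.8, 0.7]])
points3d = uv_to_3d(pixel_to_uv(pixels))
```

In the actual method both maps are learned and differentiable, so gradients from an image-alignment loss can flow back through the chain to update both the 3D mesh and its 2D pattern.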
Ren Li
École Polytechnique Fédérale de Lausanne, Switzerland
Cong Cao
Mohamed bin Zayed University of Artificial Intelligence, United Arab Emirates
Corentin Dumery
École Polytechnique Fédérale de Lausanne, Switzerland
Yingxuan You
École Polytechnique Fédérale de Lausanne, Switzerland
Computer Vision, 3D Virtual Human
Hao Li
Mohamed bin Zayed University of Artificial Intelligence, United Arab Emirates
Pascal Fua
Professor of Computer Science, EPFL
Computer Vision, Machine Learning, Computer Assisted Eng., Biomedical Imaging