AnyDressing: Customizable Multi-Garment Virtual Dressing via Latent Diffusion Models

📅 2024-12-05
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing virtual try-on methods struggle with coordinated multi-garment styling and fine-grained text-driven customization, limiting their applicability across diverse real-world scenarios. To address this, we propose AnyDressing, a dual-network framework composed of GarmentsNet and DressingNet that introduces garment-specific feature extraction, an adaptive Dressing-Attention mechanism, and instance-level garment localization learning, enabling high texture fidelity and strong text–image alignment. Built on a latent diffusion model, the framework integrates parallel garment encoding, attention enhancement, instance localization supervision, and garment-enhanced texture learning, and achieves state-of-the-art performance on multi-garment synthesis tasks. Moreover, the architecture supports plug-and-play integration with community control extensions for diffusion models, significantly improving generation diversity and controllability while preserving semantic consistency and visual realism.
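
The summary's "parallel garment encoding" can be pictured with a minimal PyTorch sketch. All names here, including `GarmentEncoder`, are illustrative assumptions rather than the paper's released code: folding the garment axis into the batch axis lets one shared encoder process every garment in a single forward pass while keeping each garment's features separate.

```python
import torch
import torch.nn as nn

class GarmentEncoder(nn.Module):
    """Hypothetical shared encoder; one forward pass covers all garments."""

    def __init__(self, in_ch: int = 4, dim: int = 320):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, dim, kernel_size=3, padding=1),
            nn.SiLU(),
            nn.Conv2d(dim, dim, kernel_size=3, padding=1),
        )

    def forward(self, garment_latents: torch.Tensor) -> torch.Tensor:
        # garment_latents: (B, G, C, H, W), one latent per garment
        B, G, C, H, W = garment_latents.shape
        x = garment_latents.reshape(B * G, C, H, W)  # fold garments into batch
        feats = self.net(x)                          # shared weights, parallel pass
        return feats.reshape(B, G, -1, H, W)         # unfold: features stay per-garment

# Usage: encode a batch of 2 outfits, 3 garments each
feats = GarmentEncoder()(torch.randn(2, 3, 4, 64, 64))
```

Keeping the garments batched rather than concatenated is what prevents garment confusion here: the encoder never mixes tokens from different garments.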

📝 Abstract
Recent advances in garment-centric image generation from text and image prompts based on diffusion models are impressive. However, existing methods lack support for various combinations of attire, and struggle to preserve the garment details while maintaining faithfulness to the text prompts, limiting their performance across diverse scenarios. In this paper, we focus on a new task, i.e., Multi-Garment Virtual Dressing, and we propose a novel AnyDressing method for customizing characters conditioned on any combination of garments and any personalized text prompts. AnyDressing comprises two primary networks named GarmentsNet and DressingNet, which are respectively dedicated to extracting detailed clothing features and generating customized images. Specifically, we propose an efficient and scalable module called Garment-Specific Feature Extractor in GarmentsNet to individually encode garment textures in parallel. This design prevents garment confusion while ensuring network efficiency. Meanwhile, we design an adaptive Dressing-Attention mechanism and a novel Instance-Level Garment Localization Learning strategy in DressingNet to accurately inject multi-garment features into their corresponding regions. This approach efficiently integrates multi-garment texture cues into generated images and further enhances text-image consistency. Additionally, we introduce a Garment-Enhanced Texture Learning strategy to improve the fine-grained texture details of garments. Thanks to our well-crafted design, AnyDressing can serve as a plug-in module to easily integrate with any community control extensions for diffusion models, improving the diversity and controllability of synthesized images. Extensive experiments show that AnyDressing achieves state-of-the-art results.
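
As a rough illustration of how an adaptive Dressing-Attention mechanism could inject each garment's features into its corresponding region, here is a minimal single-head PyTorch sketch. It is an assumption-laden reading of the abstract, not the authors' implementation; every name in it (`DressingAttention`, `region_masks`, etc.) is hypothetical.

```python
import torch
import torch.nn as nn

class DressingAttention(nn.Module):
    """Single-head cross-attention, gated per garment by a soft region mask."""

    def __init__(self, dim: int):
        super().__init__()
        self.scale = dim ** -0.5
        self.to_q = nn.Linear(dim, dim, bias=False)
        self.to_k = nn.Linear(dim, dim, bias=False)
        self.to_v = nn.Linear(dim, dim, bias=False)
        self.proj = nn.Linear(dim, dim)

    def forward(self, x, garment_feats, region_masks):
        # x:             (B, N, C)    latent image tokens
        # garment_feats: (B, G, M, C) per-garment tokens from GarmentsNet
        # region_masks:  (B, G, N)    soft masks: which tokens each garment owns
        q = self.to_q(x)
        out = torch.zeros_like(x)
        for g in range(garment_feats.shape[1]):          # one garment at a time
            k = self.to_k(garment_feats[:, g])           # (B, M, C)
            v = self.to_v(garment_feats[:, g])           # (B, M, C)
            attn = torch.softmax(q @ k.transpose(-2, -1) * self.scale, dim=-1)
            # Gate this garment's contribution to its own spatial region,
            # so textures do not bleed between garments.
            out = out + region_masks[:, g].unsqueeze(-1) * (attn @ v)
        return x + self.proj(out)                        # residual injection
```

The per-garment masking is the key design choice this sketch tries to capture: each garment's texture cues can only land on the image tokens its mask assigns to it.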
Problem

Research questions and friction points this paper is trying to address.

Virtual Try-On
Clothing Combination
Text-to-Image Detail Fidelity
Innovation

Methods, ideas, or system contributions that make the work stand out.

Garment-Specific Feature Extractor
Dressing-Attention Mechanism
Instance-Level Garment Localization Learning (see the sketch after this list)
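
One plausible form of the instance-level localization supervision, sketched under assumptions (the loss shape and all names such as `localization_loss` are guesses from the abstract, not the paper's definition): compare each garment's averaged cross-attention map against its ground-truth instance mask, steering garment features toward the correct region.

```python
import torch
import torch.nn.functional as F

def localization_loss(attn_maps: torch.Tensor, gt_masks: torch.Tensor) -> torch.Tensor:
    # attn_maps: (B, G, N) cross-attention weights averaged over heads/layers
    # gt_masks:  (B, G, N) binary instance masks, 1 where garment g appears
    attn = attn_maps / attn_maps.sum(dim=-1, keepdim=True).clamp_min(1e-6)
    mask = gt_masks / gt_masks.sum(dim=-1, keepdim=True).clamp_min(1e-6)
    # Penalize attention mass that lands outside the garment's own instance.
    return F.l1_loss(attn, mask)
```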
👥 Authors
Xinghui Li (ByteDance)
Qichao Sun (ByteDance)
Pengze Zhang (ByteDance)
Fulong Ye (ByteDance)
Zhichao Liao (Tsinghua University)
Wanquan Feng (USTC)
Songtao Zhao (ByteDance)
Qian He (ByteDance)