From Wardrobe to Canvas: Wardrobe Polyptych LoRA for Part-level Controllable Human Image Generation

📅 2025-07-14
📈 Citations: 0
Influential: 0
🤖 AI Summary
Personalized human image generation struggles to preserve identity and clothing details accurately and consistently across poses and scenes; existing approaches rely on inference-time fine-tuning or large-scale training, incurring high computational cost and poor real-time performance. This paper proposes Wardrobe Polyptych LoRA, a framework for part-level controllable generation that requires no fine-tuning at inference. By training only LoRA layers to minimize parameter overhead, and by introducing spatial reference guidance and a selective subject-region loss, it significantly improves text alignment and identity–clothing fidelity under diverse poses and backgrounds. Built upon diffusion models, the work curates a dedicated dataset and establishes a new benchmark, on which the method surpasses state-of-the-art approaches. It trains from a few samples and adds zero parameters at inference, enabling high-fidelity, identity-consistent full-body image generation.

📝 Abstract
Recent diffusion models achieve personalization by learning specific subjects, allowing learned attributes to be integrated into generated images. However, personalized human image generation remains challenging due to the need for precise and consistent attribute preservation (e.g., identity, clothing details). Existing subject-driven image generation methods often require either (1) inference-time fine-tuning with few images for each new subject or (2) large-scale dataset training for generalization. Both approaches are computationally expensive and impractical for real-time applications. To address these limitations, we present Wardrobe Polyptych LoRA, a novel part-level controllable model for personalized human image generation. By training only LoRA layers, our method removes the computational burden at inference while ensuring high-fidelity synthesis of unseen subjects. Our key idea is to condition the generation on the subject's wardrobe and leverage spatial references to reduce information loss, thereby improving fidelity and consistency. Additionally, we introduce a selective subject region loss, which encourages the model to disregard some of the reference images during training. Our loss ensures that generated images better align with text prompts while maintaining subject integrity. Notably, our Wardrobe Polyptych LoRA requires no additional parameters at the inference stage and performs generation using a single model trained on a few training samples. We construct a new dataset and benchmark tailored for personalized human image generation. Extensive experiments show that our approach significantly outperforms existing techniques in fidelity and consistency, enabling realistic and identity-preserving full-body synthesis.
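The abstract's selective subject region loss can be pictured as a masked denoising objective: the squared error is averaged only over subject regions, with some reference regions randomly dropped so the model learns not to depend on every reference image. The sketch below is a minimal illustration under those assumptions; the function name, the pixel-level drop scheme, and all parameters are hypothetical, not the paper's actual formulation.

```python
import numpy as np

def selective_subject_region_loss(pred_noise, true_noise, subject_mask,
                                  drop_prob=0.3, rng=None):
    """Hypothetical sketch of a selective subject-region denoising loss.

    subject_mask is 1 inside subject/reference regions and 0 elsewhere.
    A random keep-mask disregards a fraction of the reference regions
    during training (assumed behavior, inferred from the abstract).
    """
    rng = rng or np.random.default_rng(0)
    # Randomly drop some reference regions with probability drop_prob.
    keep = (rng.random(subject_mask.shape) > drop_prob).astype(float)
    mask = subject_mask * keep
    sq_err = (pred_noise - true_noise) ** 2
    # Average the squared error only over the kept subject pixels.
    denom = max(mask.sum(), 1.0)
    return float((sq_err * mask).sum() / denom)
```

With `drop_prob=0` this reduces to an ordinary masked MSE over the subject regions; raising it trades reference coverage for robustness to missing references.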
Problem

Research questions and friction points this paper is trying to address.

Preserving precise subject attributes (identity, clothing details) in human image generation
High computational cost of existing personalization methods for real-time synthesis
Maintaining fidelity and consistency across diverse poses and scenes
Innovation

Methods, ideas, or system contributions that make the work stand out.

Part-level controllable model using Wardrobe Polyptych LoRA
Training only LoRA layers for efficient inference
Selective subject region loss for better alignment
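The first two innovations rest on the standard LoRA mechanism: the base weight is frozen, only two small low-rank matrices are trained, and at inference the adapter can be folded into the base weight so no extra parameters remain. A minimal numpy sketch of that idea (class and parameter names are illustrative, not from the paper):

```python
import numpy as np

class LoRALinear:
    """Frozen base weight W plus a trainable low-rank update B @ A.

    Illustrative sketch of the LoRA idea the method builds on: only A
    and B are trained, and merge() folds them into W so inference uses
    zero additional parameters (W' = W + scale * B @ A).
    """

    def __init__(self, d_in, d_out, rank=4, scale=1.0, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.standard_normal((d_out, d_in))        # frozen base weight
        self.A = rng.standard_normal((rank, d_in)) * 0.01  # trainable down-projection
        self.B = np.zeros((d_out, rank))                   # trainable up-projection, zero-init
        self.scale = scale

    def forward(self, x):
        # Base path plus the low-rank adaptation path.
        return x @ self.W.T + self.scale * (x @ self.A.T) @ self.B.T

    def merge(self):
        # Fold the adapter into W: no extra parameters at inference.
        return self.W + self.scale * self.B @ self.A
```

Because `B` is zero-initialized, the adapter starts as a no-op and the base model's behavior is preserved at the beginning of training, which is the usual LoRA convention.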