🤖 AI Summary
Multi-ID customization aims to generate semantically coherent images that integrate multiple identity features, yet it faces two key challenges: "copy-paste" artifacts and weak text controllability. This paper proposes a fine-tuning-free, plug-and-play framework built upon pre-trained single-ID diffusion models. First, we design an ID-decoupled cross-attention mechanism to isolate distinct identity features and inject each into its corresponding image region. Second, we introduce depth-map-guided spatial layout control coupled with localized text prompts to enhance structural and semantic alignment. Third, we extend self-attention to improve global consistency. All modifications are implemented solely via attention injection, requiring no architectural changes or model retraining. Evaluated on our newly constructed IDBench benchmark, our method matches or surpasses existing training-based multi-ID customization approaches in image quality, identity fidelity, and text-image alignment.
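To make the core idea concrete, here is a minimal, illustrative sketch (not the authors' code) of an ID-decoupled cross-attention step: each identity embedding is injected only into the image region assigned to it, so one person's features do not leak into another's region. All names and shapes (`id_decoupled_cross_attention`, `region_masks`, the toy dimensions) are assumptions made for illustration.

```python
import torch

def id_decoupled_cross_attention(image_tokens, id_embeds, region_masks, to_q, to_k, to_v):
    # image_tokens: (B, N, C)    latent image tokens (queries)
    # id_embeds:    (B, K, M, C) K identity embeddings, M tokens each
    # region_masks: (B, K, N)    binary mask marking which tokens belong to identity k
    # to_q/to_k/to_v: the (frozen) attention projections of the base single-ID model
    B, N, C = image_tokens.shape
    K = id_embeds.shape[1]
    q = to_q(image_tokens)                                    # (B, N, C)
    out = torch.zeros_like(image_tokens)
    for k in range(K):
        key = to_k(id_embeds[:, k])                           # (B, M, C)
        val = to_v(id_embeds[:, k])                           # (B, M, C)
        attn = torch.softmax(q @ key.transpose(-2, -1) / C ** 0.5, dim=-1)
        # inject identity k only inside its own region; other identities never leak in
        out = out + region_masks[:, k].unsqueeze(-1) * (attn @ val)
    return out

# illustrative usage with toy shapes
B, N, M, C, K = 1, 64, 4, 320, 2
lin = lambda: torch.nn.Linear(C, C)
out = id_decoupled_cross_attention(
    torch.randn(B, N, C), torch.randn(B, K, M, C),
    torch.randint(0, 2, (B, K, N)).float(), lin(), lin(), lin())
```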
📝 Abstract
Multi-ID customization is an interesting topic in computer vision and has recently attracted considerable attention. Given the ID images of multiple individuals, its goal is to generate a customized image that seamlessly integrates them while preserving their respective identities. Compared to single-ID customization, multi-ID customization is considerably harder and poses two major challenges. First, because a multi-ID customization model is trained to reconstruct an image from cropped person regions, it often suffers from the copy-paste issue at inference, leading to lower image quality. Second, the model also exhibits inferior text controllability: the generated result simply combines multiple persons into one image, regardless of whether it aligns with the input text. In this work, we propose MultiID to tackle this challenging task in a training-free manner. Since existing single-ID customization models are less prone to the copy-paste issue, our key idea is to adapt them for multi-ID customization. To this end, we present an ID-decoupled cross-attention mechanism that injects distinct ID embeddings into their corresponding image regions, thereby generating multi-ID outputs. To further enhance controllability, we introduce three critical strategies, namely local prompts, depth-guided spatial control, and extended self-attention, which make the results more consistent with the text prompts and ID images. We also carefully build a benchmark, called IDBench, for evaluation. Extensive qualitative and quantitative results demonstrate the effectiveness of MultiID in addressing the two challenges above, with performance comparable to or even better than training-based multi-ID customization methods.
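As one possible reading of the "extended self-attention" strategy (a hedged sketch under our own assumptions, not the paper's implementation), the self-attention keys and values can be extended with additional reference tokens, for example features drawn from the ID regions, so that every image token attends to a shared context and the composition stays globally consistent. The function name and the `reference` input are hypothetical.

```python
import torch

def extended_self_attention(hidden, reference, to_q, to_k, to_v):
    # hidden:    (B, N, C) tokens of the image being generated
    # reference: (B, R, C) extra context tokens (hypothetical: features of the ID regions)
    q = to_q(hidden)
    kv = torch.cat([hidden, reference], dim=1)        # extend the key/value sequence
    k, v = to_k(kv), to_v(kv)
    attn = torch.softmax(q @ k.transpose(-2, -1) / hidden.shape[-1] ** 0.5, dim=-1)
    return attn @ v                                   # (B, N, C)
```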