🤖 AI Summary
Existing single-image style transfer methods struggle with fine-grained visual feature disentanglement and high-fidelity reproduction for embroidery, a textile art characterized by intricate stitch structures and material-specific properties. To address this, we propose a disentangled diffusion-based framework for embroidery style customization. First, we construct an image-analogy structure to explicitly separate content and style representations. Second, we design a two-stage contrastive LoRA modulation mechanism coupled with self-knowledge distillation, enabling precise style-content disentanglement from only one reference image. Built upon a pre-trained diffusion model, our method integrates LoRA-based low-rank adaptation, contrastive learning, and distillation into an end-to-end transfer pipeline. Evaluated on a newly curated embroidery benchmark, our approach significantly outperforms state-of-the-art methods. It also generalizes well across diverse tasks, including artistic style transfer, line-art coloring, and appearance transfer, highlighting its robustness and versatility.
📝 Abstract
Diffusion models have significantly advanced image manipulation techniques, and their ability to generate photorealistic images is beginning to transform retail workflows, particularly presale visualization. Beyond artistic style transfer, the capability to perform fine-grained visual feature transfer is becoming increasingly important. Embroidery is a textile art form characterized by an intricate interplay of diverse stitch patterns and material properties, which poses unique challenges for existing style transfer methods. To enable customization of such fine-grained features, we propose a novel contrastive learning framework that disentangles fine-grained style and content features from a single reference image, building on the classic concept of image analogy. We first construct an image pair to define the target style, and then adopt a similarity metric based on the decoupled representations of pretrained diffusion models for style-content separation. Next, we propose a two-stage contrastive LoRA modulation technique to capture fine-grained style features. In the first stage, we iteratively update the whole LoRA and the selected style blocks to initially separate style from content. In the second stage, we design a contrastive learning strategy that further decouples style and content through self-knowledge distillation. Finally, we build an inference pipeline that handles image or text inputs using only the style blocks. To evaluate our method on fine-grained style transfer, we construct a benchmark for embroidery customization. Our approach surpasses prior methods on this task and further demonstrates strong generalization to three additional domains: artistic style transfer, sketch colorization, and appearance transfer.
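The abstract does not spell out the contrastive objective, but the style-content decoupling it describes is typically driven by an InfoNCE-style loss that pulls the generated image's style embedding toward the style reference while pushing it away from content embeddings. The sketch below is an illustrative assumption, not the paper's actual loss: the function name, feature shapes, and temperature are all hypothetical, and real implementations would operate on diffusion-model (e.g. LoRA-modulated attention) features rather than plain vectors.

```python
import numpy as np

def info_nce_loss(anchor, positive, negatives, tau=0.1):
    """InfoNCE-style contrastive loss over feature vectors.

    anchor:    style embedding of the generated image (hypothetical)
    positive:  style embedding of the reference image
    negatives: list of embeddings to repel (e.g. content features)
    tau:       softmax temperature
    """
    def cos(a, b):
        return float(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))

    # Similarities: positive pair first, then all negative pairs.
    logits = np.array([cos(anchor, positive)] +
                      [cos(anchor, n) for n in negatives]) / tau
    logits -= logits.max()                      # numerical stability
    probs = np.exp(logits) / np.exp(logits).sum()
    return -np.log(probs[0])                    # low when anchor ~ positive

# Toy usage: an anchor aligned with the positive yields a smaller loss
# than one aligned with a negative.
anchor = np.array([1.0, 0.0])
pos    = np.array([1.0, 0.1])
neg    = np.array([0.0, 1.0])
loss_aligned    = info_nce_loss(anchor, pos, [neg])
loss_misaligned = info_nce_loss(anchor, neg, [pos])
```

In a training loop of the kind the abstract outlines, minimizing such a loss with respect to the LoRA parameters would encourage the selected style blocks to encode style while leaving content to the remaining weights; the self-knowledge-distillation stage would additionally match predictions from the full LoRA and the style-only blocks.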