Devil is in the Detail: Towards Injecting Fine Details of Image Prompt in Image Generation via Conflict-free Guidance and Stratified Attention

📅 2025-08-03
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Existing text-to-image diffusion models struggle to faithfully reproduce fine-grained details of image prompts, such as texture. To address this, the paper proposes an image-prompt-guided refinement method with two components: first, a conflict-free classifier-free guidance scheme that removes interference between conditional signals by using the image prompt only as a desired condition; second, Stratified Attention, a self-attention modification that jointly uses key-value pairs from both the image prompt and the generated image, balancing detail alignment against photorealism. The approach operates within the standard diffusion framework and modifies only the self-attention modules, incurring no additional training. Experiments across three image generation tasks show that the method significantly improves fidelity to image-prompt details while preserving overall generation quality, outperforming existing image-prompt-guided models.
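The conflict the summary describes can be made concrete with a small sketch of classifier-free guidance. Below, `eps_model` is a hypothetical denoiser (not from the paper's code) that accepts optional text and image conditions; the point is only how the image prompt is placed in the two guidance branches. In the conflicting setup the image prompt conditions both branches, so it cancels out of the guidance direction; in the conflict-free setup it appears only in the desired branch.

```python
def cfg(eps_cond, eps_uncond, w):
    # Standard classifier-free guidance combination:
    # eps = eps_uncond + w * (eps_cond - eps_uncond)
    return eps_uncond + w * (eps_cond - eps_uncond)

def conflicting_guidance(eps_model, x_t, text, image, w):
    # Prior methods: the image prompt conditions BOTH branches, so it
    # drops out of the difference (eps_cond - eps_uncond) and guidance
    # does not push the sample toward the image prompt.
    eps_cond = eps_model(x_t, text=text, image=image)
    eps_uncond = eps_model(x_t, text=None, image=image)
    return cfg(eps_cond, eps_uncond, w)

def conflict_free_guidance(eps_model, x_t, text, image, w):
    # Proposed scheme: the image prompt appears only as a desired
    # condition, so the guidance direction includes its contribution.
    eps_cond = eps_model(x_t, text=text, image=image)
    eps_uncond = eps_model(x_t, text=None, image=None)
    return cfg(eps_cond, eps_uncond, w)
```

With a toy additive denoiser the difference is visible directly: the image condition's contribution is absent from the conflicting guidance direction but amplified by `w` in the conflict-free one.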

📝 Abstract
While large-scale text-to-image diffusion models enable the generation of high-quality, diverse images from text prompts, these prompts struggle to capture intricate details, such as textures, preventing user intent from being fully reflected. This limitation has led to efforts to generate images conditioned on user-provided images, referred to as image prompts. Recent work modifies the self-attention mechanism to impose image conditions in generated images by replacing or concatenating the keys and values from the image prompt. This enables the self-attention layer to work like a cross-attention layer, generally used to incorporate text prompts. In this paper, we identify two common issues in existing methods of modifying self-attention to generate images that reflect the details of image prompts. First, existing approaches neglect the importance of image prompts in classifier-free guidance. Specifically, current methods use image prompts as both desired and undesired conditions in classifier-free guidance, causing conflicting signals. To resolve this, we propose conflict-free guidance by using image prompts only as desired conditions, ensuring that the generated image faithfully reflects the image prompt. In addition, we observe that the two most common self-attention modifications involve a trade-off between the realism of the generated image and alignment with the image prompt. Specifically, selecting more keys and values from the image prompt improves alignment, while selecting more from the generated image enhances realism. To balance both, we propose a new self-attention modification method, Stratified Attention, which jointly uses keys and values from both images rather than selecting between them. Through extensive experiments across three image generation tasks, we show that the proposed method outperforms existing image-prompting models in faithfully reflecting the image prompt.
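The trade-off the abstract describes can be sketched in a few lines of attention arithmetic. The code below is a minimal illustration, not the paper's implementation: `replace_kv` and `concat_kv` stand in for the two prior self-attention modifications, and `stratified_attention` shows one plausible reading of "jointly using keys and values from both images", attending to each source separately and blending the results with a hypothetical weight `alpha` so that neither source's keys dominate a single shared softmax. The paper's exact formulation may differ.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(q, k, v):
    # scaled dot-product attention: softmax(QK^T / sqrt(d)) V
    d = q.shape[-1]
    return softmax(q @ k.T / np.sqrt(d)) @ v

def replace_kv(q_gen, k_ref, v_ref):
    # prior method 1: K/V taken only from the image prompt
    # (better alignment with the prompt, weaker realism)
    return attention(q_gen, k_ref, v_ref)

def concat_kv(q_gen, k_gen, v_gen, k_ref, v_ref):
    # prior method 2: K/V from both sources in one softmax,
    # so the two sources compete for attention mass
    return attention(q_gen,
                     np.concatenate([k_gen, k_ref]),
                     np.concatenate([v_gen, v_ref]))

def stratified_attention(q_gen, k_gen, v_gen, k_ref, v_ref, alpha=0.5):
    # sketch: attend to each source separately, then blend, so both
    # the generated image (realism) and the image prompt (alignment)
    # contribute regardless of their relative key magnitudes
    out_gen = attention(q_gen, k_gen, v_gen)
    out_ref = attention(q_gen, k_ref, v_ref)
    return (1.0 - alpha) * out_gen + alpha * out_ref
```

At `alpha=0` this reduces to ordinary self-attention over the generated image, and at `alpha=1` to attending only to the image prompt, so the blend interpolates between the two prior modifications' extremes.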
Problem

Research questions and friction points this paper is trying to address.

Resolve conflicting signals in image prompt guidance
Balance realism and alignment in self-attention modifications
Improve fine detail injection in image generation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Conflict-free guidance for image prompts
Stratified Attention balances realism and alignment
Modified self-attention for detailed image generation