LLM-guided Instance-level Image Manipulation with Diffusion U-Net Cross-Attention Maps

📅 2025-01-23
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses zero-shot, instance-level editing in text-to-image generation, i.e., modifying specific objects without fine-tuning, masks, bounding boxes, or manual annotations. Methodologically, it introduces a unified framework that jointly leverages a large language model (LLM) to parse edit intentions, an open-vocabulary detector to localize target instances, and synergistic manipulation of diffusion U-Net intermediate activations and cross-attention maps for fine-grained attribute adjustment and spatial repositioning. The authors present this as the first approach to integrate LLM-driven semantic understanding, open-vocabulary detection, and diffusion models' cross-attention mechanisms into a single editing paradigm, supporting diverse operations including object replacement, relocation, and attribute modification. The method preserves high visual fidelity and spatial consistency even in complex scenes. Both qualitative and quantitative evaluations demonstrate significant improvements over existing training-free editing methods.
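
To make the attention-manipulation idea concrete, the sketch below captures the cross-attention maps that link prompt tokens to spatial locations inside the diffusion U-Net during sampling. This is a minimal sketch, not the authors' implementation: it assumes the Hugging Face diffusers API, an example Stable Diffusion checkpoint, and a hypothetical StoreAttnProcessor helper; the paper's method edits signals of this kind to reposition or re-style individual instances.

# Minimal sketch (assumed diffusers API, hypothetical StoreAttnProcessor helper):
# record cross-attention maps from every U-Net cross-attention layer while sampling.
import torch
from diffusers import StableDiffusionPipeline
from diffusers.models.attention_processor import AttnProcessor

class StoreAttnProcessor(AttnProcessor):
    """Default attention processor that additionally records cross-attention maps."""

    def __init__(self, store):
        super().__init__()
        self.store = store  # accumulates (batch*heads, image_tokens, text_tokens) maps

    def __call__(self, attn, hidden_states, encoder_hidden_states=None,
                 attention_mask=None, **kwargs):
        if encoder_hidden_states is not None:  # cross-attention only (text conditioning)
            query = attn.head_to_batch_dim(attn.to_q(hidden_states))
            key = attn.head_to_batch_dim(attn.to_k(encoder_hidden_states))
            probs = attn.get_attention_scores(query, key, attention_mask)
            self.store.append(probs.detach().cpu())
        return super().__call__(attn, hidden_states, encoder_hidden_states,
                                attention_mask, **kwargs)

maps = []
pipe = StableDiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4")  # example checkpoint
pipe = pipe.to("cuda" if torch.cuda.is_available() else "cpu")
pipe.unet.set_attn_processor(StoreAttnProcessor(maps))
image = pipe("a red car next to a blue bicycle", num_inference_steps=20).images[0]
# `maps` now holds one attention map per cross-attention layer per denoising step;
# averaging them per text token yields a rough spatial mask for each mentioned object.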

📝 Abstract
The advancement of text-to-image synthesis has introduced powerful generative models capable of creating realistic images from textual prompts. However, precise control over image attributes remains challenging, especially at the instance level. While existing methods offer some control through fine-tuning or auxiliary information, they often face limitations in flexibility and accuracy. To address these challenges, we propose a pipeline leveraging Large Language Models (LLMs), open-vocabulary detectors, cross-attention maps, and intermediate activations of the diffusion U-Net for instance-level image manipulation. Our method detects objects that are mentioned in the prompt and present in the generated image, enabling precise manipulation without extensive training or input masks. By incorporating cross-attention maps, our approach ensures coherence in manipulated images while controlling object positions. As a result, it enables precise, instance-level manipulations without fine-tuning or auxiliary information such as masks or bounding boxes. Code is available at https://github.com/Palandr123/DiffusionU-NetLLM
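
One way to instantiate the detection step described in the abstract, locating the instances mentioned in the prompt inside the generated image, is with an off-the-shelf open-vocabulary detector. The snippet below uses OWL-ViT purely as an illustrative stand-in; the paper does not necessarily rely on this particular detector, and the file name and query phrases are placeholders.

# Illustrative sketch of the open-vocabulary detection step: locate the instances
# named in the edit request inside the generated image. OWL-ViT is only an example
# detector; the paper's pipeline may use a different open-vocabulary model.
import torch
from PIL import Image
from transformers import OwlViTProcessor, OwlViTForObjectDetection

processor = OwlViTProcessor.from_pretrained("google/owlvit-base-patch32")
model = OwlViTForObjectDetection.from_pretrained("google/owlvit-base-patch32")

image = Image.open("generated.png")          # placeholder: image produced by the diffusion model
queries = [["a red car", "a blue bicycle"]]  # placeholder: object phrases parsed from the prompt

inputs = processor(text=queries, images=image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Convert raw logits and boxes into (score, label, box) detections in pixel coordinates.
target_sizes = torch.tensor([image.size[::-1]])  # (height, width)
detections = processor.post_process_object_detection(
    outputs, threshold=0.2, target_sizes=target_sizes
)[0]

for score, label, box in zip(detections["scores"], detections["labels"], detections["boxes"]):
    print(f"{queries[0][int(label)]}: score={score:.2f}, box={box.tolist()}")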
Problem

Research questions and friction points this paper is trying to address.

Image Manipulation
Text-to-Image Synthesis
Content Adjustment
Innovation

Methods, ideas, or system contributions that make the work stand out.

Diffusion U-Net
Cross Attention Mapping
Large Language Model Guidance
Andrey Palaev
Samsung Research
Computer Vision, Deep Learning, Generative models
Adil Khan
King's College London
Neuroscience
Syed M. Ahsan Kazmi
Department of Computer Science, University of the West of England, BS16 1QY, Bristol, UK