🤖 AI Summary
Existing language-guided image editing methods suffer from three key limitations: cumbersome prompt engineering, reliance on model fine-tuning, and insufficient local controllability. To address these, we propose a training-free, progressive exemplar-driven editing framework that performs “implicit surgery” within the latent space of pre-trained diffusion models. Our approach leverages multi-exemplar alignment and spatial-mask-guided progressive feature injection to enable pixel-level and region-level fine-grained control, as well as dynamic fusion of arbitrary numbers of real-image exemplars. Crucially, it operates solely via inference-time latent-space manipulations—requiring no architectural modification or parameter updates—and is compatible with mainstream open-source text-to-image diffusion models. Extensive experiments demonstrate significant improvements in both quantitative metrics and human evaluations, achieving high-fidelity, zero-training, professional-grade image editing with enhanced controllability and efficiency.
📝 Abstract
Recent advances in language-guided diffusion models for image editing are often bottlenecked by cumbersome prompt engineering to precisely articulate desired changes. An intuitive alternative draws guidance from in-the-wild image exemplars to help users bring their imagined edits to life. Contemporary exemplar-based editing methods shy away from leveraging the rich latent space learned by pre-existing large text-to-image (TTI) models and fall back on training with curated objective functions to achieve the task. Though somewhat effective, this demands significant computational resources and lacks compatibility with diverse base models and arbitrary exemplar counts. On further investigation, we also find that these techniques restrict user control to applying uniform global changes over the entire edited region. In this paper, we introduce a novel framework for progressive exemplar-driven editing with off-the-shelf diffusion models, dubbed PIXELS, to enable customization by providing granular control over edits, allowing adjustments at the pixel or region level. Our method operates solely during inference to facilitate imitative editing, enabling users to draw inspiration from a dynamic number of reference images, or multimodal prompts, and progressively incorporate all the desired changes without retraining or fine-tuning existing TTI models. This capability of fine-grained control opens up a range of new possibilities, including selective modification of individual objects and specification of gradual spatial changes. We demonstrate that PIXELS delivers high-quality edits efficiently, yielding notable improvements in quantitative metrics as well as human evaluation. By making high-quality image editing more accessible, PIXELS has the potential to bring professional-grade edits to a wider audience with the ease of using any open-source image generation model.
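The spatial-mask-guided, inference-time latent manipulation described above can be illustrated with a minimal sketch. This is a schematic, not the paper's actual update rule: the function name, the linear mask-weighted blending, and the decaying injection schedule are all illustrative assumptions, and the frozen diffusion model's denoising step is elided.

```python
import numpy as np

def progressive_feature_injection(latent, exemplar_latents, masks,
                                  num_steps=50, strength=0.8):
    """Schematic sketch: blend features from exemplar latents into the
    working latent under per-region spatial masks, with an injection
    weight that decays over the denoising trajectory.
    (Illustrative only; not PIXELS' exact formulation.)"""
    z = latent.copy()
    for t in range(num_steps):
        # Illustrative schedule: inject strongly early, taper off later.
        w = strength * (1.0 - t / num_steps)
        for z_ref, m in zip(exemplar_latents, masks):
            # m holds per-pixel weights in [0, 1]; pixel- or region-level
            # control comes from how finely m is specified. Broadcasts
            # over the channel dimension.
            z = (1.0 - w * m) * z + (w * m) * z_ref
        # ... one denoising step of the frozen TTI model would run here ...
    return z
```

Because the blend is gated by the mask, regions where `m` is zero are untouched, and any number of (exemplar, mask) pairs can be folded in without modifying or retraining the underlying model.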