🤖 AI Summary
To address a key obstacle in local image editing, namely the reliance on precise masks or complex object localization that hinders usability for non-expert users, this paper proposes a lightweight click-driven editing framework. Given only a single click and a text prompt, the method automatically generates a semantically coherent editing region around the clicked location and seamlessly inserts new content there. Technically, it introduces a novel dynamic mask generation mechanism that requires neither pre-trained segmentation models nor fine-tuning: a Blended Latent Diffusion (BLD) process, combined with a mask-guided CLIP semantic loss, drives progressive, click-triggered mask expansion. Experiments demonstrate state-of-the-art performance across multiple automated metrics, and human evaluation further confirms high visual quality and robustness. The method substantially reduces user interaction overhead while preserving editing fidelity and flexibility.
📝 Abstract
Recent advancements in generative models have revolutionized image generation and editing, making these tasks accessible to non-experts. This paper focuses on local image editing, particularly the task of adding new content to a loosely specified area. Existing methods often require a precise mask or a detailed description of the location, which can be cumbersome and prone to errors. We propose Click2Mask, a novel approach that simplifies the local editing process by requiring only a single point of reference (in addition to the content description). A mask is dynamically grown around this point during a Blended Latent Diffusion (BLD) process, guided by a masked CLIP-based semantic loss. Click2Mask surpasses the limitations of segmentation-based and fine-tuning-dependent methods, offering a more user-friendly and contextually accurate solution. Our experiments demonstrate that Click2Mask not only minimizes user effort but also enables competitive or superior local image manipulations compared to SoTA methods, according to both human judgement and automatic metrics. Key contributions include the simplification of user input, the ability to freely add objects unconstrained by existing segments, and the integration potential of our dynamic mask approach within other editing methods.
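The core idea, a mask that grows progressively from a single click under a semantic relevance signal, can be illustrated with a toy sketch. This is not the paper's implementation: Click2Mask derives its guidance from a masked CLIP loss inside the BLD diffusion loop, whereas here `score_map` is simply a given 2-D array standing in for that signal, and the growth rule (admit the best-scoring frontier pixels each step) is a simplified assumption.

```python
import numpy as np

def grow_mask(click_xy, score_map, steps=50, grow_per_step=20):
    """Toy sketch of click-triggered progressive mask growth.

    score_map is a stand-in for a per-pixel semantic relevance signal
    (in Click2Mask this comes from a masked CLIP-based loss during the
    diffusion process; here it is just a precomputed array). Starting
    from the clicked pixel, each step admits the highest-scoring pixels
    adjacent to the current mask, so the region expands toward
    semantically coherent territory rather than as a fixed circle.
    """
    h, w = score_map.shape
    mask = np.zeros((h, w), dtype=bool)
    x, y = click_xy
    mask[y, x] = True
    for _ in range(steps):
        # 4-neighborhood dilation of the current mask
        dil = mask.copy()
        dil[1:, :] |= mask[:-1, :]   # down
        dil[:-1, :] |= mask[1:, :]   # up
        dil[:, 1:] |= mask[:, :-1]   # right
        dil[:, :-1] |= mask[:, 1:]   # left
        # frontier: unmasked pixels touching the mask
        frontier = dil & ~mask
        idx = np.flatnonzero(frontier)
        if idx.size == 0:
            break
        # admit the top-scoring frontier pixels this step
        scores = score_map.ravel()[idx]
        take = idx[np.argsort(scores)[::-1][:grow_per_step]]
        mask.ravel()[take] = True
    return mask
```

In the actual method the equivalent of `score_map` is recomputed as denoising proceeds, so the mask and the generated content co-evolve; this static version only shows the click-seeded expansion mechanic.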