🤖 AI Summary
To address the core challenges in drag-based image editing—namely, distortion of target regions and difficulty in preserving natural image manifold consistency—this paper introduces DragFlow, the first drag-editing framework to integrate strong generative priors from DiT-based models (e.g., FLUX). Methodologically, DragFlow replaces conventional point-level supervision with region-wise affine transformations, employs IP-Adapter to maintain subject fidelity, enforces background preservation as a hard constraint via gradient masking, and leverages multimodal large language models to resolve ambiguous editing instructions. Evaluated on DragBench-DR and the newly introduced ReD Bench, DragFlow achieves new state-of-the-art performance across three key metrics: editing accuracy, target naturalness, and background fidelity. The framework demonstrates superior robustness and visual coherence compared to prior approaches. Code and benchmark datasets will be publicly released.
📝 Abstract
Drag-based image editing has long suffered from distortions in the target region, largely because the priors of earlier base models such as Stable Diffusion are insufficient to project optimized latents back onto the natural image manifold. With the shift from UNet-based DDPMs to more scalable DiTs trained with flow matching (e.g., SD3.5, FLUX), generative priors have become significantly stronger, enabling advances across diverse editing tasks. However, drag-based editing has yet to benefit from these stronger priors. This work proposes the first framework to effectively harness FLUX's rich prior for drag-based editing, dubbed DragFlow, achieving substantial gains over baselines. We first show that directly applying point-based drag editing to DiTs performs poorly: unlike the highly compressed features of UNets, DiT features are insufficiently structured to provide reliable guidance for point-wise motion supervision. To overcome this limitation, DragFlow introduces a region-based editing paradigm, in which affine transformations enable richer and more consistent feature supervision. Additionally, we integrate pretrained open-domain personalization adapters (e.g., IP-Adapter) to enhance subject consistency, while preserving background fidelity through gradient mask-based hard constraints. Multimodal large language models (MLLMs) are further employed to resolve task ambiguities. For evaluation, we curate a novel Region-based Dragging benchmark (ReD Bench) featuring region-level dragging instructions. Extensive experiments on DragBench-DR and ReD Bench show that DragFlow surpasses both point-based and region-based baselines, setting a new state-of-the-art in drag-based image editing. Code and datasets will be publicly available upon publication.
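The two core mechanics the abstract names—region-wise affine supervision instead of point-wise motion supervision, and gradient masking as a hard background constraint—can be illustrated with a minimal PyTorch sketch. This is not the paper's implementation; the function names (`region_affine_loss`, `apply_background_grad_mask`) and the exact loss form are illustrative assumptions, and the real method operates on DiT features inside the diffusion sampling loop.

```python
import torch
import torch.nn.functional as F

def region_affine_loss(features, src_mask, theta):
    """Region-based supervision (illustrative): warp the current features
    with a user-specified affine transform and penalize the discrepancy
    inside the transformed region, rather than matching isolated points.

    features: (B, C, H, W) feature map from the generative backbone
    src_mask: (B, 1, H, W) binary mask of the source region to drag
    theta:    (B, 2, 3) affine matrix in normalized coordinates
    """
    grid = F.affine_grid(theta, features.size(), align_corners=False)
    # Sample reference features through the affine grid; detach so the
    # reference acts as a fixed target for the optimized features.
    warped = F.grid_sample(features.detach(), grid, align_corners=False)
    warped_mask = F.grid_sample(src_mask, grid, align_corners=False)
    # Supervise only where the transformed region lands.
    return ((warped - features) * warped_mask).abs().mean()

def apply_background_grad_mask(latent_grad, edit_mask):
    """Hard constraint (illustrative): zero out gradients outside the
    editable region so background latents are never updated."""
    return latent_grad * edit_mask
```

With an identity affine transform the warped features coincide with the originals, so the loss vanishes; zeroing the edit mask freezes every latent, which is the hard-constraint behavior the abstract describes.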