🤖 AI Summary
Existing instruction-guided image editing methods suffer from three key limitations: poor generalization across editing skills, noisy training data (stemming from simplistic filtering such as CLIP-score), and support only for low-resolution images with fixed aspect ratios. To address these, we propose OmniEdit, a general-purpose image editing framework tailored for real-world scenarios that unifies seven diverse editing tasks and natively supports arbitrary aspect ratios. Methodologically, we introduce an expert-model collaborative supervision paradigm; employ GPT-4o-driven importance sampling to enhance dataset quality; design a lightweight EditNet architecture to improve editing fidelity; and adopt multi-scale, multi-aspect-ratio diffusion training. Our framework achieves state-of-the-art performance across multi-task and multi-aspect-ratio benchmarks, outperforming prior work in both automated and human evaluations. The code, datasets, and models are publicly released.
📝 Abstract
Instruction-guided image editing methods have demonstrated significant potential by training diffusion models on automatically synthesized or manually annotated image editing pairs. However, these methods remain far from practical, real-world applications. We identify three primary challenges contributing to this gap. First, existing models have limited editing skills due to the biased synthesis process. Second, these methods are trained on datasets containing a high volume of noise and artifacts, a consequence of applying simple filtering methods such as CLIP-score. Third, all these datasets are restricted to a single low resolution and fixed aspect ratio, limiting their versatility for real-world use cases. In this paper, we present OmniEdit, an omnipotent editor that seamlessly handles seven different image editing tasks at any aspect ratio. Our contributions are four-fold: (1) OmniEdit is trained with supervision from seven different specialist models to ensure task coverage; (2) we apply importance sampling based on scores provided by large multimodal models (such as GPT-4o), rather than CLIP-score, to improve data quality; (3) we propose a new editing architecture, EditNet, to greatly boost the editing success rate; and (4) we train on images with different aspect ratios so that our model can handle any image in the wild. We have curated a test set containing images of different aspect ratios, accompanied by diverse instructions covering all tasks. Both automatic and human evaluations demonstrate that OmniEdit significantly outperforms all existing models. Our code, dataset, and model will be available at https://tiger-ai-lab.github.io/OmniEdit/
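The abstract contrasts CLIP-score filtering with importance sampling driven by quality scores from a large multimodal model. A minimal sketch of what such score-weighted sampling could look like; the score range (0-10 judge ratings), the `temperature` knob, and the helper name are illustrative assumptions, not the paper's actual implementation:

```python
import math
import random

def importance_sample(pairs, scores, k, temperature=1.0):
    """Draw k editing pairs, weighting by an assumed LMM quality score.

    `scores` are hypothetical 0-10 judge ratings; a softmax over
    (score / temperature) turns them into sampling probabilities, so
    higher-rated pairs appear more often than under uniform sampling.
    """
    logits = [s / temperature for s in scores]
    m = max(logits)  # subtract max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Sampling with replacement, proportional to the normalized scores.
    return random.choices(pairs, weights=probs, k=k)

# Illustrative usage: three synthetic editing pairs with judge scores.
pairs = ["pair_a", "pair_b", "pair_c"]
scores = [9.0, 2.0, 7.5]
batch = importance_sample(pairs, scores, k=2)
```

Lowering `temperature` sharpens the distribution toward the best-scored pairs, while a high temperature approaches uniform sampling; a hard CLIP-score threshold is the degenerate case where probabilities are simply 0 or 1.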