NeIn: Telling What You Don't Want

📅 2024-09-09
🏛️ arXiv.org
📈 Citations: 2
Influential: 0
🤖 AI Summary
Existing text-guided image editing models exhibit significant deficiencies in interpreting negation instructions (e.g., “without X”) and lack standardized evaluation benchmarks. Method: We introduce NeIn, the first large-scale negation-instruction image editing dataset (367K samples), and establish the first negation-oriented evaluation benchmark for image editing. Our methodology employs an “automated generation + multi-model collaborative filtering” paradigm: leveraging MS-COCO, we synthesize negative edits using BLIP, a fine-tuned variant of InstructPix2Pix (MagicBrush), and LLaVA-NeXT, followed by quality filtering and a dedicated negation-understanding evaluation protocol. Contribution/Results: Extensive experiments reveal substantial performance degradation of mainstream models under negation instructions, confirming NeIn’s critical role in advancing semantic robustness of vision-language models (VLMs) and enabling rigorous, reproducible assessment of negation comprehension in image editing.

📝 Abstract
Negation is a fundamental linguistic concept used by humans to convey information that they do not desire. Despite this, minimal research has focused on negation within text-guided image editing. This lack of research means that vision-language models (VLMs) for image editing may struggle to understand negation, and thus may fail to produce accurate results. One barrier to achieving human-level intelligence is the lack of a standard collection by which research into negation can be evaluated. This paper presents the first large-scale dataset, Negative Instruction (NeIn), for studying negation within instruction-based image editing. Our dataset comprises 366,957 quintuplets in total (source image, original caption, selected object, negative sentence, and target image), including 342,775 queries for training and 24,182 queries for benchmarking image editing methods. Specifically, we automatically generate NeIn based on a large, existing vision-language dataset, MS-COCO, via two steps: generation and filtering. During the generation phase, we leverage two VLMs, BLIP and InstructPix2Pix (fine-tuned on the MagicBrush dataset), to generate NeIn's samples and the negative clauses that express the content of the source image. In the subsequent filtering phase, we apply BLIP and LLaVA-NeXT to remove erroneous samples. Additionally, we introduce an evaluation protocol to assess the negation understanding of image editing models. Extensive experiments using our dataset across multiple VLMs for text-guided image editing demonstrate that even recent state-of-the-art VLMs struggle to understand negative queries.
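The quintuplet structure and the generation-then-filtering pipeline described in the abstract can be sketched in Python as follows. This is a minimal illustration, not the paper's implementation: the template in `make_negative_sentence` and the `object_present` predicate are hypothetical stand-ins for the BLIP/InstructPix2Pix generation and BLIP/LLaVA-NeXT verification models actually used.

```python
from dataclasses import dataclass

@dataclass
class Quintuplet:
    source_image: str       # path to the MS-COCO source image
    original_caption: str   # caption describing the source image
    selected_object: str    # object the negative query asks to exclude
    negative_sentence: str  # e.g. "The image must not contain any dog."
    target_image: str       # path to the edited image without the object

def make_negative_sentence(obj: str) -> str:
    # Simplified single template; the dataset uses varied phrasings.
    return f"The image must not contain any {obj}."

def keep_sample(q: Quintuplet, object_present) -> bool:
    # Filtering stage: a sample survives only if the selected object
    # is verified absent from the target image. `object_present` is a
    # stand-in for the paper's VLM-based checks.
    return not object_present(q.target_image, q.selected_object)

# Usage with a dummy checker that "detects" nothing:
sample = Quintuplet(
    source_image="coco/000001.jpg",
    original_caption="A dog sitting on a couch.",
    selected_object="dog",
    negative_sentence=make_negative_sentence("dog"),
    target_image="nein/000001_no_dog.jpg",
)
kept = keep_sample(sample, lambda img, obj: False)
```

A real filtering pass would replace the lambda with a model query per sample, discarding any quintuplet where the edit failed to remove the object.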
Problem

Research questions and friction points this paper is trying to address.

Lack of research on negation in text-guided image editing
No standard dataset for evaluating negation understanding in VLMs
Current VLMs struggle with accurate negation-based image editing
Innovation

Methods, ideas, or system contributions that make the work stand out.

Large-scale dataset for negation in image editing
Automated generation and filtering using VLMs
Evaluation protocol for negation understanding