🤖 AI Summary
Existing multimodal model editing (MMED) methods rely on low-similarity or random inputs for evaluation, which often masks overfitting and induces “transient visual blindness” in visual question answering (VQA)—a phenomenon where models excessively depend on edited text while disregarding visual inputs. This work formally defines and characterizes transient visual blindness for the first time. We propose a comprehensive locality evaluation framework covering three distinct scenarios: random images, image-absent inputs, and consistent images. To enforce cross-modal balance, we design an adversarial loss mechanism, integrated with dynamic VQA evaluation and token-level analysis for fine-grained quantification of editing effects. Experiments demonstrate that our method improves locality by 17% on average, significantly mitigating transient visual blindness and outperforming state-of-the-art baselines across multiple benchmarks.
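As a rough illustration of the three-scenario locality evaluation described above, the sketch below scores how often the edited model's answers on out-of-scope inputs match the pre-edit model's. The `pre_edit`/`post_edit` callables and all helper names are hypothetical placeholders, not the paper's implementation.

```python
# Minimal sketch (assumed interface, not the authors' code) of locality
# evaluation over the three scenarios: random images, image-absent inputs,
# and consistent images.
from typing import Callable, Iterable, Optional, Tuple

VQAInput = Tuple[Optional[object], str]  # (image or None, question)
VQAModel = Callable[[Optional[object], str], str]  # (image, question) -> answer

def locality_score(pre_edit: VQAModel, post_edit: VQAModel,
                   samples: Iterable[VQAInput]) -> float:
    """Fraction of out-of-scope samples whose answer is unchanged by the edit."""
    samples = list(samples)
    unchanged = sum(pre_edit(img, q) == post_edit(img, q) for img, q in samples)
    return unchanged / max(len(samples), 1)

def evaluate_locality(pre_edit: VQAModel, post_edit: VQAModel,
                      random_img: Iterable[VQAInput],
                      no_img: Iterable[VQAInput],
                      consistent_img: Iterable[VQAInput]) -> dict:
    # Report one locality number per scenario; low scores on the
    # image-absent or consistent-image splits would signal the kind of
    # text-driven overfitting the paper calls transient visual blindness.
    return {
        "random_image": locality_score(pre_edit, post_edit, random_img),
        "no_image": locality_score(pre_edit, post_edit, no_img),
        "consistent_image": locality_score(pre_edit, post_edit, consistent_img),
    }
```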
📝 Abstract
Multimodal Model Editing (MMED) aims to correct erroneous knowledge in multimodal models. Existing evaluation methods, adapted from textual model editing, overstate success by relying on low-similarity or random inputs, which obscures overfitting. We propose a comprehensive locality evaluation framework covering three key dimensions: random-image locality, no-image locality, and consistent-image locality, operationalized through seven distinct data types and enabling a detailed, structured analysis of multimodal edits. We introduce De-VQA, a dynamic evaluation for visual question answering, which uncovers a phenomenon we term transient blindness: overfitting to edit-similar text while ignoring visual input. Token analysis shows that edits disproportionately affect textual tokens. We propose locality-aware adversarial losses to balance cross-modal representations. Empirical results demonstrate that our approach consistently outperforms existing baselines, reducing transient blindness and improving locality by 17% on average.
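One way a locality-aware objective of this kind could be instantiated, purely as an assumption-laden sketch rather than the authors' adversarial formulation, is to pair the standard edit loss with a penalty that keeps the edited model's predictions on out-of-scope inputs (e.g., edit-similar text with unrelated or absent images) close to those of the frozen pre-edit model. All tensor names below are hypothetical.

```python
# Hedged sketch: edit objective plus a KL-based locality regularizer standing
# in for the cross-modal balancing term; not the paper's exact loss.
import torch
import torch.nn.functional as F

def edit_with_locality_loss(edit_logits: torch.Tensor,      # [B, V] post-edit logits on edit samples
                            edit_labels: torch.Tensor,      # [B]    target token ids for the edit
                            loc_logits_edited: torch.Tensor,  # [N, V] post-edit logits on out-of-scope samples
                            loc_logits_frozen: torch.Tensor,  # [N, V] pre-edit (frozen) logits on the same samples
                            lam: float = 1.0) -> torch.Tensor:
    # Standard editing objective: make the model produce the corrected answer.
    edit_loss = F.cross_entropy(edit_logits, edit_labels)
    # Locality penalty: discourage drift on out-of-scope (locality) inputs.
    loc_loss = F.kl_div(F.log_softmax(loc_logits_edited, dim=-1),
                        F.log_softmax(loc_logits_frozen, dim=-1),
                        log_target=True, reduction="batchmean")
    return edit_loss + lam * loc_loss
```

The weight `lam` trades edit success against locality; larger values push the edited model to leave behavior on visually grounded, out-of-scope questions untouched.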