🤖 AI Summary
Current LVLM evaluation protocols and defense strategies prioritize ignoring text embedded in images to achieve robustness against typographic attacks, but this compromises the multimodal reasoning that real-world scenarios require, such as recognizing pedestrians while reading traffic signs. Method: We propose "Read-or-Ignore Visual Question Answering (RIO-VQA)", a novel task that formalizes context-adaptive decisions about whether to read the text in an image. To support it, we introduce RIO-Bench, the first benchmark featuring counterfactual paired data under a "read-or-ignore" selective text-utilization paradigm, and design the first data-driven adaptive defense framework, integrating counterfactual image generation, selective attention, and robust VQA modeling. Contribution/Results: Our analysis exposes a fundamental misalignment between existing evaluation protocols and practical multimodal reasoning requirements. Experiments show that state-of-the-art LVLMs and defenses fail to balance robustness with text understanding; our approach significantly improves RIO-VQA accuracy, establishing a new pathway toward reliable multimodal reasoning.
📝 Abstract
Large vision-language models (LVLMs) are vulnerable to typographic attacks, where misleading text within an image overrides visual understanding. Existing evaluation protocols and defenses, largely focused on object recognition, implicitly encourage ignoring text to achieve robustness; however, real-world scenarios often require joint reasoning over both objects and text (e.g., recognizing pedestrians while reading traffic signs). To address this, we introduce a novel task, Read-or-Ignore VQA (RIO-VQA), which formalizes selective text use in visual question answering (VQA): models must decide, from context, when to read text and when to ignore it. For evaluation, we present the Read-or-Ignore Benchmark (RIO-Bench), a standardized dataset and protocol that, for each real image, provides same-scene counterfactuals (read / ignore) by varying only the textual content and question type. Using RIO-Bench, we show that strong LVLMs and existing defenses fail to balance typographic robustness and text-reading capability, highlighting the need for improved approaches. Finally, RIO-Bench enables a novel data-driven defense that learns adaptive selective text use, moving beyond prior non-adaptive, text-ignoring defenses. Overall, this work reveals a fundamental misalignment between the existing evaluation scope and real-world requirements, providing a principled path toward reliable LVLMs. Our Project Page is at https://turingmotors.github.io/rio-vqa/.
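To make the read-or-ignore evaluation idea concrete, here is a minimal, purely illustrative sketch (not the official RIO-Bench format or metric; all field names, the `paired_accuracy` function, and the toy model are hypothetical). It captures the core requirement: a model should be credited only when it handles both same-scene counterfactuals, reading the in-image text when the question asks about it and ignoring misleading text when the question asks about the scene's objects.

```python
def paired_accuracy(pairs, model):
    """Fraction of counterfactual pairs where the model answers BOTH the
    'read' and the 'ignore' variant correctly (hypothetical paired metric)."""
    correct = 0
    for pair in pairs:
        read_ok = model(pair["image_read"], pair["q_read"]) == pair["a_read"]
        ignore_ok = model(pair["image_ignore"], pair["q_ignore"]) == pair["a_ignore"]
        correct += read_ok and ignore_ok  # both variants must succeed
    return correct / len(pairs)

# Toy model that always trusts in-image text: it reads signs correctly,
# but is fooled by the typographic attack in the 'ignore' variant.
def text_trusting_model(image, question):
    return image["text"]

# One same-scene counterfactual pair: only the in-image text and the
# question type differ between the two variants.
pairs = [{
    "image_read":   {"scene": "street", "text": "STOP"},  # sign reads STOP
    "q_read":       "What does the sign say?",
    "a_read":       "STOP",
    "image_ignore": {"scene": "street", "text": "cat"},   # misleading overlay
    "q_ignore":     "What object is in the scene?",
    "a_ignore":     "pedestrian",
}]

print(paired_accuracy(pairs, text_trusting_model))  # 0.0: reads, cannot ignore
```

Under this kind of paired scoring, a defense that simply ignores all text and a model that always trusts text both score zero, which is exactly the robustness/reading trade-off the benchmark is designed to expose.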