🤖 AI Summary
This study addresses the cognitive reliability of vision-language models (VLMs) in visual question answering (VQA), specifically their ability to recognize the boundaries of their knowledge and abstain from answering unanswerable questions. We identify three canonical categories of unanswerable questions: hybrid entities that fuse objects and animals, objects in unconventional or impossible scenes, and fictional or non-existent figures. We introduce VisionTrap, the first systematic benchmark for this task. VisionTrap uses GANs and diffusion models to synthesize non-photorealistic images and pairs them with logically structured yet inherently unanswerable questions. Crucially, we propose "response suppression" as a novel evaluation dimension to quantify a model's ability to abstain. Extensive experiments reveal that state-of-the-art multimodal foundation models exhibit strong answer-assertion biases and lack awareness of their epistemic limitations. Our work establishes a new benchmark, introduces a principled metric for abstention, and offers a fresh perspective on evaluating trustworthy AI systems.
📝 Abstract
Visual Question Answering (VQA) has been widely studied, with extensive research focusing on how VLMs respond to answerable questions about real-world images. However, there has been limited exploration of how these models handle unanswerable questions, particularly cases where they should abstain from responding. This research investigates VQA performance on unrealistic, synthetically generated images paired with unanswerable questions, assessing whether models recognize the limits of their knowledge or instead generate incorrect answers. We introduce a dataset, VisionTrap, comprising three categories of unanswerable questions across diverse image types: (1) hybrid entities that fuse objects and animals, (2) objects depicted in unconventional or impossible scenarios, and (3) fictional or non-existent figures. The questions are logically structured yet inherently unanswerable, testing whether models can correctly recognize their limitations. Our findings highlight the importance of incorporating such questions into VQA benchmarks to evaluate whether models tend to answer even when they should abstain.
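The evaluation idea at the heart of the paper, measuring whether a model abstains on unanswerable inputs, can be sketched as a simple abstention-rate metric. This is a minimal illustration only: the keyword-based abstention detector, the marker phrases, and the data format are assumptions for demonstration, not the paper's actual "response suppression" protocol.

```python
# Hypothetical sketch of an abstention-rate metric for unanswerable VQA items.
# The marker phrases below are illustrative assumptions, not the paper's protocol.
ABSTAIN_MARKERS = (
    "i don't know",
    "cannot be answered",
    "unanswerable",
    "not possible to determine",
)

def is_abstention(response: str) -> bool:
    """Heuristically detect whether a model response abstains from answering."""
    text = response.lower()
    return any(marker in text for marker in ABSTAIN_MARKERS)

def abstention_rate(responses: list[str]) -> float:
    """Fraction of responses (all to unanswerable questions) that abstain."""
    if not responses:
        return 0.0
    return sum(is_abstention(r) for r in responses) / len(responses)

# Toy usage on three responses to unanswerable questions:
demo = [
    "The creature's tail is blue.",                      # asserts an answer
    "This question cannot be answered from the image.",  # abstains
    "I don't know; the scene is fictional.",             # abstains
]
print(abstention_rate(demo))  # 2 of 3 abstain
```

In practice, a keyword detector like this is brittle; a real evaluation would more likely use structured output options (e.g. an explicit "unanswerable" choice) or an LLM judge, but the metric itself reduces to the same fraction.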