🤖 AI Summary
Existing VQA debiasing methods struggle to model deep semantic correlations between questions and images and overlook dynamic assessment of input relevance during inference. This paper proposes Optimized Question-Image Relation Learning (QIRL), a question-image relational learning framework targeting biases arising from question-image independence. Its contributions are threefold: (1) a Negative Image Generation (NIG) module that uses generation-based self-supervision to automatically produce highly irrelevant question-image pairs during training, thereby strengthening relevance modeling; (2) an Irrelevant Sample Identification (ISI) module that detects and filters irrelevant inputs at inference time, reducing prediction errors; and (3) a specialized metric for evaluating the ISI module's relevance filtering. The approach is model-agnostic and can be integrated with various VQA backbones, including LXMERT, ViLBERT, and UNITER. Experiments on VQA-CPv2 and VQA-v2 demonstrate its effectiveness and generalization ability, achieving state-of-the-art results among data augmentation strategies.
📝 Abstract
Existing debiasing approaches in Visual Question Answering (VQA) primarily focus on enhancing visual learning, integrating auxiliary models, or employing data augmentation strategies. However, these methods exhibit two major drawbacks. First, current debiasing techniques fail to capture deeper relations between images and text, because prevalent learning frameworks do not expose models to the highly contrasting samples from which such correlations could be extracted. Second, they do not assess the relevance between the input question and image during inference, as no prior work has examined the degree of input relevance in debiasing studies. Motivated by these limitations, we propose a novel framework, Optimized Question-Image Relation Learning (QIRL), which employs a generation-based self-supervised learning strategy. Specifically, two modules are introduced to address the aforementioned issues. The Negative Image Generation (NIG) module automatically produces highly irrelevant question-image pairs during training to enhance correlation learning, while the Irrelevant Sample Identification (ISI) module improves model robustness by detecting and filtering irrelevant inputs, thereby reducing prediction errors. Furthermore, to validate our concept of reducing output errors by filtering unrelated question-image inputs, we propose a specialized metric to evaluate the performance of the ISI module. Notably, our approach is model-agnostic and can be integrated with various VQA models. Extensive experiments on VQA-CPv2 and VQA-v2 demonstrate the effectiveness and generalization ability of our method. Among data augmentation strategies, our approach achieves state-of-the-art results.
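To make the two modules concrete, the sketch below illustrates the general idea in PyTorch, under loud assumptions: the paper's NIG synthesizes highly irrelevant images generatively, whereas `make_negative_pairs` here uses a much simpler stand-in (reusing other images in the batch as mismatched negatives); likewise, `RelevanceFilter` is an illustrative MLP scoring head with an arbitrary threshold, not the paper's actual ISI design. All names and dimensions are hypothetical.

```python
import torch
import torch.nn as nn


def make_negative_pairs(images: torch.Tensor) -> torch.Tensor:
    """Pair each question with a mismatched image by rolling the batch.

    A simple stand-in for NIG: true negative generation in QIRL is
    generation-based, while this baseline merely reuses other images
    in the batch as irrelevant counterparts.
    """
    return torch.roll(images, shifts=1, dims=0)


class RelevanceFilter(nn.Module):
    """ISI-style relevance scoring: abstain on low question-image relevance.

    `q_emb` and `v_emb` are assumed to be pooled question and image
    embeddings from any VQA backbone (model-agnostic usage).
    """

    def __init__(self, dim: int = 768, threshold: float = 0.5):
        super().__init__()
        self.score_head = nn.Sequential(
            nn.Linear(2 * dim, dim), nn.ReLU(), nn.Linear(dim, 1)
        )
        self.threshold = threshold

    def forward(self, q_emb: torch.Tensor, v_emb: torch.Tensor) -> torch.Tensor:
        # Relevance score in (0, 1) for each question-image pair.
        return torch.sigmoid(self.score_head(torch.cat([q_emb, v_emb], dim=-1)))

    def keep_mask(self, q_emb: torch.Tensor, v_emb: torch.Tensor) -> torch.Tensor:
        # Boolean mask: True where the pair is relevant enough to answer;
        # False pairs would be filtered out before answer prediction.
        return self.forward(q_emb, v_emb).squeeze(-1) >= self.threshold
```

At training time, the scorer can be supervised with original pairs as positives and the generated pairs as negatives; at inference, answers are only produced where `keep_mask` is True, which is the error-reduction behavior the ISI metric is meant to quantify.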