🤖 AI Summary
To address the domain-knowledge gaps of multimodal large language models (MLLMs) in visual question answering (VQA), and the loss of fine-grained visual detail caused by unimodal retrieval, this paper proposes a fine-grained retrieval-augmented generation (RAG) framework. The method constructs fine-grained multimodal knowledge units—each pairing an entity-aligned image region with its corresponding textual description—and introduces a knowledge correction chain to improve robustness in cross-modal reasoning. It further jointly optimizes text-image embeddings for precise multimodal vector retrieval and answer generation. Evaluated on KB-VQA benchmarks, the approach delivers substantial gains over state-of-the-art methods—up to 10%—and is particularly strong on complex VQA tasks that require careful visual-semantic alignment.
📝 Abstract
Visual Question Answering (VQA) aims to answer natural language questions by drawing on information from images. Although cutting-edge multimodal large language models (MLLMs) such as GPT-4o achieve strong performance on VQA tasks, they frequently fall short when domain-specific or up-to-date knowledge is required. To mitigate this issue, retrieval-augmented generation (RAG) over external knowledge bases (KBs), referred to as KB-VQA, has emerged as a promising approach. Nevertheless, conventional unimodal retrieval techniques, which translate images into textual descriptions, often lose critical visual details. This study presents fine-grained knowledge units, which merge textual snippets with entity images stored in vector databases. Building on these units, we introduce a knowledge unit retrieval-augmented generation framework (KU-RAG) that integrates fine-grained retrieval with MLLMs. KU-RAG ensures precise retrieval of relevant knowledge and strengthens reasoning through a knowledge correction chain. Experimental results show that our approach significantly boosts the performance of leading KB-VQA methods, with improvements of up to 10%.
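The knowledge-unit idea described above can be sketched in code. The snippet below is a minimal, hypothetical illustration, not the paper's implementation: each knowledge unit pairs an entity-aligned image region with a text snippet, both are mapped into a shared vector space, and units are retrieved by cosine similarity. The `embed` function is a deterministic hash-based stub standing in for real text/image encoders (e.g., a CLIP-style model), and the averaging-based fusion is a placeholder for the jointly optimized text-image embedding the abstract mentions.

```python
# Hedged sketch of fine-grained knowledge-unit retrieval (KU-RAG-style).
# All names and the embedding scheme are illustrative assumptions.
from dataclasses import dataclass
import hashlib
import math

DIM = 8  # toy embedding dimensionality


def embed(content: str) -> list[float]:
    """Stub embedder: a deterministic unit vector derived from a hash.
    A real system would use a trained multimodal encoder instead."""
    digest = hashlib.sha256(content.encode()).digest()
    v = [b / 255.0 for b in digest[:DIM]]
    norm = math.sqrt(sum(x * x for x in v)) or 1.0
    return [x / norm for x in v]


@dataclass
class KnowledgeUnit:
    entity: str
    image_region_id: str  # reference to an entity-aligned image crop
    text_snippet: str

    def vector(self) -> list[float]:
        # Fuse text and image-region embeddings into one unit vector.
        # Simple averaging here; the paper jointly optimizes embeddings.
        t = embed(self.text_snippet)
        i = embed(self.image_region_id)
        fused = [(a + b) / 2 for a, b in zip(t, i)]
        norm = math.sqrt(sum(x * x for x in fused)) or 1.0
        return [x / norm for x in fused]


def retrieve(query: str, units: list[KnowledgeUnit], k: int = 1) -> list[KnowledgeUnit]:
    """Return the top-k knowledge units by cosine similarity to the query.
    Vectors are unit-normalized, so the dot product equals cosine similarity."""
    qv = embed(query)
    return sorted(
        units,
        key=lambda u: -sum(a * b for a, b in zip(qv, u.vector())),
    )[:k]
```

In a full KU-RAG-style pipeline, the retrieved units (text snippet plus the referenced image region) would be passed to the MLLM as grounding context, with the knowledge correction chain filtering or revising them before answer generation.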