🤖 AI Summary
To address the challenges of scarce annotated data and inaccurate cross-modal semantic alignment between visual content and external knowledge in zero-shot visual question answering (ZS-VQA), this paper proposes a collaborative reasoning framework integrating knowledge graphs (KGs) and large language models (LLMs). Without fine-tuning or training samples, the LLM first parses image captions and question semantics, while the KG dynamically retrieves and expands relevant entity–relation subgraphs to bridge the vision, language, and knowledge modalities. A multi-source information fusion mechanism and an adaptive weighted loss optimization strategy are further introduced to enhance answer accuracy and robustness. The method achieves state-of-the-art performance on two major benchmarks, VQAv2 and OK-VQA. The source code and evaluation datasets are publicly released.
📝 Abstract
Zero-shot visual question answering (ZS-VQA), an emerging critical research area, aims to answer visual questions without any training samples. Existing research in ZS-VQA has proposed leveraging knowledge graphs or large language models (LLMs) as external information sources to help VQA models comprehend images and questions. However, LLMs often struggle to accurately interpret the specific meaning of a question. Meanwhile, although knowledge graphs contain rich entity relationships, it is challenging to effectively connect their entities to the content of individual images for visual question answering. In this paper, we propose a novel design that combines knowledge graphs and LLMs for zero-shot visual question answering. Our approach uses the powerful understanding capabilities of LLMs to accurately interpret image content through a strategic question search mechanism. Meanwhile, the knowledge graph is used to expand and connect users' queries to the image content for better visual question answering. An optimization algorithm is further used to determine the optimal weights for the loss functions derived from different information sources, yielding a globally optimal set of candidate answers. Experimental results on two benchmark datasets demonstrate that our model achieves state-of-the-art (SOTA) performance. Both source code and benchmark data will be released for public access.
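The abstract mentions combining signals from different information sources (LLM and knowledge graph) under learned per-source weights to select a globally optimal answer. The following toy sketch shows one way such weighted multi-source fusion could look; the function name, the min-max normalization, the example scores, and the fixed weights are all illustrative assumptions, not the paper's actual algorithm.

```python
# Illustrative sketch of weighted multi-source answer fusion.
# All names, scores, and the normalization scheme are hypothetical;
# the paper's optimization procedure is not reproduced here.

def fuse_candidate_scores(source_scores, weights):
    """Combine per-source candidate scores into one ranking.

    source_scores: dict source_name -> dict candidate -> raw score
    weights:       dict source_name -> non-negative weight
    Returns the candidate with the highest weighted sum of
    min-max-normalized per-source scores.
    """
    fused = {}
    for source, scores in source_scores.items():
        lo, hi = min(scores.values()), max(scores.values())
        span = (hi - lo) or 1.0  # guard against division by zero
        for cand, s in scores.items():
            norm = (s - lo) / span
            fused[cand] = fused.get(cand, 0.0) + weights[source] * norm
    return max(fused, key=fused.get)

# Hypothetical usage: the LLM favors "umbrella", the KG favors "parasol";
# the weighting decides which source dominates the final answer.
llm_scores = {"umbrella": 0.9, "parasol": 0.4, "tent": 0.1}
kg_scores = {"umbrella": 0.3, "parasol": 0.8, "tent": 0.2}
answer = fuse_candidate_scores(
    {"llm": llm_scores, "kg": kg_scores},
    weights={"llm": 0.6, "kg": 0.4},
)
```

In this sketch the weights are fixed constants; the paper instead optimizes them, so shifting weight between sources can change which candidate wins.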