Combining Knowledge Graph and LLMs for Enhanced Zero-shot Visual Question Answering

📅 2025-01-22
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the challenges of scarce annotated data and inaccurate cross-modal semantic alignment between visual content and external knowledge in zero-shot visual question answering (ZS-VQA), this paper proposes a collaborative reasoning framework integrating knowledge graphs (KGs) and large language models (LLMs). Without fine-tuning or training samples, the LLM first parses image captions and question semantics, while the KG dynamically retrieves and expands relevant entity–relation subgraphs to bridge vision–language–knowledge modalities. Furthermore, a multi-source information fusion mechanism and an adaptive weighted loss optimization strategy are introduced to enhance answer generation accuracy and robustness. The method achieves state-of-the-art performance on two major benchmarks—VQAv2 and OK-VQA. The source code and evaluation datasets are publicly released.
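The KG step described above — dynamically retrieving and expanding entity–relation subgraphs around the parsed question and caption — can be sketched as a simple hop-limited triple lookup. Everything here (the toy triples, the entity names, the one-hop depth) is an illustrative assumption, not the paper's actual retrieval procedure:

```python
# Hypothetical sketch of entity-relation subgraph expansion over a toy KG.
# The triples and seed entities are invented for illustration only.

# Toy knowledge graph as (head, relation, tail) triples.
KG_TRIPLES = [
    ("banana", "is_a", "fruit"),
    ("banana", "has_color", "yellow"),
    ("fruit", "is_a", "food"),
    ("monkey", "eats", "banana"),
]

def expand_subgraph(seed_entities, triples, hops=1):
    """Collect all triples reachable within `hops` hops of the seed entities."""
    frontier = set(seed_entities)
    subgraph = []
    for _ in range(hops):
        next_frontier = set()
        for h, r, t in triples:
            if h in frontier or t in frontier:
                if (h, r, t) not in subgraph:
                    subgraph.append((h, r, t))
                next_frontier.update((h, t))
        frontier = next_frontier
    return subgraph

# Seed entities would come from the LLM's parse of the caption/question.
sub = expand_subgraph(["banana"], KG_TRIPLES)
```

With the seed `"banana"`, the one-hop subgraph contains the three triples that touch that entity; in the paper's pipeline, such a subgraph would then be fed into answer generation alongside the LLM's semantic parse.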

📝 Abstract
Zero-shot visual question answering (ZS-VQA), an emerging critical research area, aims to answer visual questions without any training samples. Existing ZS-VQA research has proposed leveraging knowledge graphs or large language models (LLMs), respectively, as external information sources to help VQA models comprehend images and questions. However, LLMs often struggle to accurately interpret the specific meaning of a question. Meanwhile, although knowledge graphs contain rich entity relationships, it is challenging to effectively connect those entities to the content of an individual image for visual question answering. In this paper, we propose a novel design that combines knowledge graphs and LLMs for zero-shot visual question answering. Our approach uses the LLM's powerful understanding capabilities to accurately interpret image content through a strategic question search mechanism. Meanwhile, the knowledge graph is used to expand and connect users' queries to the image content for better visual question answering. An optimization algorithm further determines the optimal weights for the loss functions derived from the different information sources, yielding a globally optimal set of candidate answers. Experimental results on two benchmark datasets demonstrate that our model achieves state-of-the-art (SOTA) performance. Both source code and benchmark data will be released for public access.
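The multi-source fusion and weight-optimization idea in the abstract can be illustrated with a minimal sketch: per-candidate answer scores from the LLM and the KG are blended with a weight chosen by a simple grid search against a reference answer. The candidate answers, score values, and grid search below are all stand-ins for the paper's (unspecified) optimization algorithm:

```python
# Hypothetical sketch of multi-source answer fusion with a searched weight.
# Scores, candidates, and the reference answer are invented for illustration.
import numpy as np

candidates = ["yellow", "green", "red"]
llm_scores = np.array([0.6, 0.3, 0.1])  # assumed LLM answer scores
kg_scores = np.array([0.5, 0.1, 0.4])   # assumed KG answer scores

def fused_answer(w):
    """Weighted fusion of the two score vectors; returns the top candidate."""
    combined = w * llm_scores + (1.0 - w) * kg_scores
    return candidates[int(np.argmax(combined))]

# Toy "validation" signal: pick the weight whose fused answer matches a
# known reference answer (a stand-in for minimizing a weighted loss).
reference = "yellow"
best_w = max(np.linspace(0.0, 1.0, 11),
             key=lambda w: fused_answer(w) == reference)
```

In the paper, the weighting is reportedly learned by an optimization algorithm over loss functions from each information source rather than a one-dimensional grid search; the sketch only shows the shape of the fusion step.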
Problem

Research questions and friction points this paper is trying to address.

Zero-Shot Visual Question Answering
Knowledge Graphs
Large Language Models
Innovation

Methods, ideas, or system contributions that make the work stand out.

Knowledge Graphs
Large Language Models
Zero-Shot Visual Question Answering
Qian Tao
South China University of Technology, Guangzhou, China
Xiaoyang Fan
South China University of Technology, Guangzhou, China
Yong Xu
South China University of Technology, Guangzhou, China
Xingquan Zhu
Florida Atlantic University, Boca Raton, Florida, USA
Yufei Tang
Center Director & Associate Professor, Florida Atlantic University
Machine Learning · Physics-Informed Learning · Dynamical Systems · Renewable Energy · Smart Grids