Aligning Vision to Language: Text-Free Multimodal Knowledge Graph Construction for Enhanced LLM Reasoning

📅 2025-03-17
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing text-based knowledge graphs (KGs) inadequately address knowledge incompleteness and hallucination in large language models (LLMs) during multimodal reasoning, while multimodal KGs (MMKGs) suffer from narrow semantics due to manual annotation and noisy vision–semantics alignments. This paper proposes the first **text-free MMKG construction paradigm**: (1) cascaded vision–language models align image features with textual semantics to generate image-specific descriptions; (2) a **cross-modal similarity verification mechanism** quantifies semantic consistency to filter alignment noise; and (3) **direct entity–image linking** eliminates intermediate textual representations, significantly improving storage efficiency and semantic fidelity. Experiments demonstrate that the constructed MMKG substantially enhances LLM accuracy in multimodal reasoning over unlabeled images and markedly reduces hallucination. The implementation is publicly available.
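The paper does not spell out the verification mechanism on this page, so here is a minimal sketch of one plausible reading: embed the image and each generated description in a shared space (e.g. with a CLIP-style encoder), then keep only descriptions whose cosine similarity to the image exceeds a threshold. All names, the toy embeddings, and the 0.25 threshold below are illustrative assumptions, not details from the paper.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def filter_descriptions(image_emb, candidates, threshold=0.25):
    """Keep only generated descriptions whose embedding is sufficiently
    aligned with the image embedding; the rest are treated as noise."""
    kept = []
    for text, text_emb in candidates:
        score = cosine_similarity(image_emb, text_emb)
        if score >= threshold:
            kept.append((text, score))
    return kept

# Toy vectors standing in for real VLM/CLIP features.
image_emb = np.array([1.0, 0.0, 0.5])
candidates = [
    ("a dog on a beach", np.array([0.9, 0.1, 0.4])),        # well aligned
    ("a stock market chart", np.array([-0.8, 0.9, -0.2])),  # alignment noise
]
print(filter_descriptions(image_emb, candidates))
```

In a real pipeline the threshold would be tuned on held-out data, and the surviving descriptions would then be parsed into triples for the MMKG.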

📝 Abstract
Multimodal reasoning in Large Language Models (LLMs) struggles with incomplete knowledge and hallucination artifacts, challenges that textual Knowledge Graphs (KGs) only partially mitigate due to their modality isolation. While Multimodal Knowledge Graphs (MMKGs) promise enhanced cross-modal understanding, their practical construction is impeded by the semantic narrowness of manual text annotations and inherent noise in visual-semantic entity linkages. In this paper, we propose the Vision-align-to-Language integrated Knowledge Graph (VaLiK), a novel approach for constructing MMKGs that enhances LLM reasoning through cross-modal information supplementation. Specifically, we cascade pre-trained Vision-Language Models (VLMs) to align image features with text, transforming them into descriptions that encapsulate image-specific information. Furthermore, we develop a cross-modal similarity verification mechanism to quantify semantic consistency, effectively filtering out noise introduced during feature alignment. Even without manually annotated image captions, the refined descriptions alone suffice to construct the MMKG. Compared to conventional MMKG construction paradigms, our approach achieves substantial storage efficiency gains while maintaining direct entity-to-image linkage capability. Experimental results on multimodal reasoning tasks demonstrate that LLMs augmented with VaLiK outperform previous state-of-the-art models. Our code is published at https://github.com/Wings-Of-Disaster/VaLiK.
Problem

Research questions and friction points this paper is trying to address.

Enhances LLM reasoning with multimodal knowledge graphs
Addresses noise in visual-semantic entity linkages
Improves storage efficiency in MMKG construction
Innovation

Methods, ideas, or system contributions that make the work stand out.

Cascades VLMs to align image features with text
Develops cross-modal similarity verification mechanism
Constructs MMKGs without manual image captions
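The cascading idea above can be sketched abstractly: each stage sees the image plus the description accumulated so far and refines it. The stage functions below are hypothetical stand-ins for pre-trained VLMs (e.g. a coarse captioner followed by a detail-grounding model), not the actual models VaLiK cascades.

```python
def cascade_describe(image, stages):
    """Run an image through a cascade of captioning stages; each stage
    receives the image and the description so far, and returns a
    refined description."""
    description = ""
    for stage in stages:
        description = stage(image, description)
    return description

# Hypothetical stand-ins for pre-trained VLMs.
def coarse_captioner(image, prev):
    # First stage: produce a short base caption.
    return f"a photo of {image['subject']}"

def detail_refiner(image, prev):
    # Later stage: enrich the running description with grounded detail.
    return f"{prev}, {image['detail']}"

image = {"subject": "a red bicycle", "detail": "leaning against a brick wall"}
print(cascade_describe(image, [coarse_captioner, detail_refiner]))
# -> "a photo of a red bicycle, leaning against a brick wall"
```

In the paper's setting, the final description would then pass the cross-modal similarity check before being converted into MMKG triples linked directly to the source image.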
Junming Liu
Shanghai Artificial Intelligence Laboratory, Tongji University
Siyuan Meng
Shanghai Artificial Intelligence Laboratory, East China Normal University
Yanting Gao
Tongji University
Song Mao
Shanghai Artificial Intelligence Laboratory
Pinlong Cai
Shanghai Artificial Intelligence Laboratory (Artificial Intelligence, Decision Intelligence, Knowledge Systems)
Guohang Yan
Shanghai Artificial Intelligence Laboratory
Yirong Chen
Stanford University
Zilin Bian
Assistant Professor, Rochester Institute of Technology (Smart City, Intelligent Transportation System, Digital Twin)
Botian Shi
Shanghai Artificial Intelligence Laboratory (VLMs, Document Understanding, Autonomous Driving)
Ding Wang
Shanghai Artificial Intelligence Laboratory