🤖 AI Summary
Existing text-based knowledge graphs (KGs) inadequately address knowledge incompleteness and hallucination in large language models (LLMs) during multimodal reasoning, while multimodal KGs (MMKGs) suffer from narrow semantics due to manual annotation and noisy vision–semantics alignments. This paper proposes the first **text-free MMKG construction paradigm**: (1) cascaded vision–language models align image features with textual semantics to generate image-specific descriptions; (2) a **cross-modal similarity verification mechanism** quantifies semantic consistency to filter alignment noise; and (3) **direct entity–image linking** eliminates intermediate textual representations, significantly improving storage efficiency and semantic fidelity. Experiments demonstrate that the constructed MMKG substantially enhances LLM accuracy in multimodal reasoning over unlabeled images and markedly reduces hallucination. The implementation is publicly available.
📝 Abstract
Multimodal reasoning in Large Language Models (LLMs) struggles with incomplete knowledge and hallucination artifacts, challenges that textual Knowledge Graphs (KGs) only partially mitigate due to their modality isolation. While Multimodal Knowledge Graphs (MMKGs) promise enhanced cross-modal understanding, their practical construction is impeded by the semantic narrowness of manual text annotations and the inherent noise in visual-semantic entity linkages. In this paper, we propose the Vision-align-to-Language integrated Knowledge Graph (VaLiK), a novel approach for constructing MMKGs that enhances LLM reasoning through cross-modal information supplementation. Specifically, we cascade pre-trained Vision-Language Models (VLMs) to align image features with text, transforming them into descriptions that encapsulate image-specific information. Furthermore, we develop a cross-modal similarity verification mechanism that quantifies semantic consistency, effectively filtering out noise introduced during feature alignment. Even without manually annotated image captions, the refined descriptions alone suffice to construct the MMKG. Compared to conventional MMKG construction paradigms, our approach achieves substantial storage efficiency gains while retaining direct entity-to-image linkage. Experimental results on multimodal reasoning tasks demonstrate that LLMs augmented with VaLiK outperform previous state-of-the-art models. Our code is published at https://github.com/Wings-Of-Disaster/VaLiK.
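The cross-modal similarity verification step can be pictured as scoring each VLM-generated description against the source image in a shared embedding space and discarding low-scoring candidates. The sketch below is illustrative, not the paper's implementation: it assumes image and text embeddings have already been produced by an encoder such as CLIP, and the function names and the threshold value are hypothetical.

```python
import numpy as np


def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))


def filter_descriptions(image_emb: np.ndarray,
                        candidates: list[tuple[str, np.ndarray]],
                        threshold: float = 0.25) -> list[tuple[str, float]]:
    """Keep (description, score) pairs whose image-text similarity
    clears the threshold, sorted by descending score.

    `candidates` pairs each generated description with its text
    embedding; embeddings are assumed to come from the same
    vision-language encoder as `image_emb`.
    """
    kept = []
    for text, text_emb in candidates:
        score = cosine_similarity(image_emb, text_emb)
        if score >= threshold:  # below-threshold descriptions are treated as alignment noise
            kept.append((text, score))
    return sorted(kept, key=lambda pair: pair[1], reverse=True)
```

In a real pipeline the surviving descriptions would then be parsed into entities and relations for the MMKG, with each retained description keeping a pointer back to its source image to preserve the direct entity-to-image linkage.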