Complementarity-driven Representation Learning for Multi-modal Knowledge Graph Completion

📅 2025-07-28
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the problem of non-robust entity representations caused by imbalanced modality distributions in multimodal knowledge graph completion (MMKGC), this paper proposes the Mixture of Complementary Modality Experts (MoCME) framework. MoCME systematically models both intra-modal and inter-modal complementarity via a Complementarity-guided Modality Knowledge Fusion (CMKF) module that fuses multi-view and multi-modal embeddings across structural, textual, and visual information. It also introduces an Entropy-guided Negative Sampling (EGNS) mechanism that dynamically prioritizes informative, uncertain negative samples to improve training effectiveness and model robustness. Evaluated on five benchmark datasets, MoCME consistently outperforms state-of-the-art methods, demonstrating that complementarity-aware fusion is an effective way to learn from imbalanced multimodal knowledge graphs.

📝 Abstract
Multi-modal Knowledge Graph Completion (MMKGC) aims to uncover hidden world knowledge in multimodal knowledge graphs by leveraging both multimodal and structural entity information. However, the inherent imbalance in multimodal knowledge graphs, where modality distributions vary across entities, poses challenges in utilizing additional modality data for robust entity representation. Existing MMKGC methods typically rely on attention or gate-based fusion mechanisms but overlook complementarity contained in multi-modal data. In this paper, we propose a novel framework named Mixture of Complementary Modality Experts (MoCME), which consists of a Complementarity-guided Modality Knowledge Fusion (CMKF) module and an Entropy-guided Negative Sampling (EGNS) mechanism. The CMKF module exploits both intra-modal and inter-modal complementarity to fuse multi-view and multi-modal embeddings, enhancing representations of entities. Additionally, we introduce an Entropy-guided Negative Sampling mechanism to dynamically prioritize informative and uncertain negative samples to enhance training effectiveness and model robustness. Extensive experiments on five benchmark datasets demonstrate that our MoCME achieves state-of-the-art performance, surpassing existing approaches.
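The abstract describes CMKF as weighting modalities by their complementarity rather than by plain attention. The paper does not give the exact formulation here, so the following is a minimal illustrative sketch, assuming complementarity can be approximated as average cosine dissimilarity between a modality's embedding and the others (a modality carrying information the others lack gets a larger fusion weight); the function name and weighting scheme are hypothetical, not the paper's CMKF.

```python
import numpy as np

def complementarity_fusion(modality_embs, tau=1.0):
    """Fuse per-modality entity embeddings with weights derived from
    pairwise complementarity, approximated as 1 - cosine similarity:
    a modality that differs more from the others contributes more.
    Illustrative sketch only, not the paper's exact CMKF module."""
    names = list(modality_embs)
    X = np.stack([modality_embs[n] for n in names])        # (m, d)
    Xn = X / np.linalg.norm(X, axis=1, keepdims=True)      # unit vectors
    sim = Xn @ Xn.T                                        # cosine similarities
    m = len(names)
    # Complementarity score: mean dissimilarity to the other modalities.
    comp = (1.0 - sim).sum(axis=1) / (m - 1)
    w = np.exp(comp / tau)
    w /= w.sum()                                           # softmax weights
    return w @ X, dict(zip(names, w))

# Toy example: the visual embedding is orthogonal to the other two,
# so it receives the largest fusion weight.
fused, weights = complementarity_fusion({
    "structural": np.array([1.0, 0.0, 0.0]),
    "textual":    np.array([0.9, 0.1, 0.0]),
    "visual":     np.array([0.0, 0.0, 1.0]),
})
```

With attention-based fusion, a near-duplicate modality can dominate; weighting by dissimilarity instead pushes the fused representation toward modalities that add new information, which is the intuition behind exploiting inter-modal complementarity.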
Problem

Research questions and friction points this paper is trying to address.

Address imbalance in multimodal knowledge graphs for robust entity representation
Exploit intra-modal and inter-modal complementarity in multi-modal data
Enhance training with dynamic negative sampling for model robustness
Innovation

Methods, ideas, or system contributions that make the work stand out.

Complementarity-driven multi-modal fusion
Entropy-guided negative sampling
Multi-view embedding enhancement
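The entropy-guided negative sampling idea above can be sketched as follows. This is an assumption-laden illustration, not the paper's exact EGNS sampler: it treats hypothetical model logits for candidate negative triples as inputs, scores each candidate by the entropy of the model's sigmoid confidence, and samples negatives in proportion to that entropy, so candidates near the decision boundary (the most uncertain, informative ones) are drawn more often.

```python
import numpy as np

def entropy_guided_negatives(scores, k, rng=None):
    """Pick k negative samples, weighting candidates by the Bernoulli
    entropy of the model's sigmoid confidence: negatives the model is
    most uncertain about are prioritized. Illustrative sketch of the
    EGNS idea; `scores` are hypothetical logits for candidate triples."""
    rng = rng or np.random.default_rng(0)
    p = 1.0 / (1.0 + np.exp(-np.asarray(scores, dtype=float)))  # sigmoid
    eps = 1e-12
    H = -(p * np.log(p + eps) + (1.0 - p) * np.log(1.0 - p + eps))
    probs = H / H.sum()                                         # sampling dist.
    return rng.choice(len(scores), size=k, replace=False, p=probs)

# Candidates with logits near 0 (model unsure) dominate the sampling
# distribution; confidently-scored candidates (-6.0, 5.0) are rarely picked.
idx = entropy_guided_negatives([-6.0, -0.1, 0.2, 5.0], k=2)
```

Compared with uniform negative sampling, this concentrates gradient signal on hard, high-information negatives, which is the stated motivation for EGNS's improved training effectiveness and robustness.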