🤖 AI Summary
Large language models (LLMs) yield low-quality pseudo-labels for multimodal fake news detection, rendering those labels unsuitable as direct supervision. Method: This paper proposes a synergistic framework integrating LLMs with global label propagation. It jointly leverages LLM-generated pseudo-labels, multimodal feature alignment, graph neural network (GNN)-based structural modeling, and global consistency propagation. Contribution/Results: The approach introduces two key innovations: (1) the first coupling mechanism between LLM-derived pseudo-labels and graph-structured global label propagation; and (2) a masked self-loop suppression strategy that prevents label leakage among training samples, thereby enhancing propagation robustness. Extensive experiments on multiple benchmark datasets demonstrate significant improvements over state-of-the-art methods, validating the complementary benefits of semantic generation capability and structured reasoning in multimodal fake news detection.
📝 Abstract
Large Language Models (LLMs) can assist multimodal fake news detection by predicting pseudo labels. However, LLM-generated pseudo labels alone perform poorly compared to traditional detection methods, making their effective integration non-trivial. In this paper, we propose the Global Label Propagation Network with LLM-based Pseudo Labeling (GLPN-LLM) for multimodal fake news detection, which integrates LLM capabilities via label propagation techniques. The global label propagation module utilizes LLM-generated pseudo labels and enhances prediction accuracy by propagating label information among all samples. For label propagation, a mask-based mechanism is designed to prevent label leakage during training by ensuring that training nodes do not propagate their own labels back to themselves. Experimental results on benchmark datasets show that by synergizing LLMs with label propagation, our model achieves superior performance over state-of-the-art baselines.
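The mask-based mechanism described above can be illustrated with a minimal sketch of graph label propagation in which the adjacency diagonal is zeroed out, so no node receives its own label back. This is a hypothetical NumPy illustration under assumed names and shapes, not the paper's actual GLPN-LLM implementation:

```python
import numpy as np

def masked_label_propagation(adj, labels, alpha=0.9, iters=10):
    """Propagate (pseudo-)label distributions over a graph.

    adj:    (n, n) non-negative adjacency matrix
    labels: (n, c) one-hot or soft (pseudo-)labels per node
    alpha:  propagation weight vs. retaining the initial labels
    """
    n = adj.shape[0]
    # Mask self-loops: a node must not propagate its own label to itself.
    A = adj * (1.0 - np.eye(n))
    # Row-normalize the masked adjacency into a transition matrix.
    deg = A.sum(axis=1, keepdims=True)
    deg[deg == 0] = 1.0
    P = A / deg
    Y = labels.astype(float).copy()
    for _ in range(iters):
        # Mix neighbor information with the initial (pseudo-)labels.
        Y = alpha * (P @ Y) + (1.0 - alpha) * labels
    return Y
```

With `alpha=1.0` and a single iteration, each node's output depends only on its neighbors' labels, which shows that self-loop masking removes the leakage path from a node to itself.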