Can LLMs Find Fraudsters? Multi-level LLM Enhanced Graph Fraud Detection

📅 2025-07-16
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing graph-based fraud detection methods rely on preprocessed node embeddings and fixed graph structures, neglecting rich semantic cues inherent in raw textual attributes. While large language models (LLMs) excel at textual representation learning, their effective multimodal integration with graph structural information remains challenging. To address this, we propose a novel multi-level LLM-enhanced framework that introduces, for the first time, dedicated text-graph co-enhancers at both the node-type level and relation level. Specifically, LLMs extract fine-grained semantic representations and external knowledge from node texts, which are then dynamically fused into graph neural networks via learnable gating mechanisms—enabling semantic-guided, structure-aware node representations. Extensive experiments on four real-world datasets demonstrate that our method consistently outperforms state-of-the-art approaches, achieving average F1-score improvements of 3.2–7.8 percentage points. These results validate the effectiveness and generalizability of multi-level semantic-structural joint modeling for graph fraud detection.
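The summary describes fusing LLM-extracted text representations into GNN node embeddings through a learnable gating mechanism. The paper's exact formulation is not given here, so the following is only a minimal sketch of one common gating scheme (a sigmoid gate computed from both embeddings, then a convex per-dimension combination); all names and shapes are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gated_fusion(text_emb, graph_emb, W_g, b_g):
    """Fuse an LLM-derived text embedding with a GNN node embedding
    via a sigmoid gate: h = g * text + (1 - g) * graph (elementwise)."""
    g = sigmoid(np.concatenate([text_emb, graph_emb]) @ W_g + b_g)
    return g * text_emb + (1.0 - g) * graph_emb

rng = np.random.default_rng(0)
d = 8                                # illustrative embedding size
text_emb = rng.normal(size=d)        # stand-in for a pooled LLM text embedding
graph_emb = rng.normal(size=d)       # stand-in for a GNN message-passing output
W_g = rng.normal(size=(2 * d, d)) * 0.1  # learnable gate parameters (here random)
b_g = np.zeros(d)

fused = gated_fusion(text_emb, graph_emb, W_g, b_g)
print(fused.shape)
```

Because each gate value lies in (0, 1), the fused vector stays between the text and graph embeddings in every dimension, which is what makes the resulting representation "semantic-guided" yet "structure-aware".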

📝 Abstract
Graph fraud detection has garnered significant attention as Graph Neural Networks (GNNs) have proven effective in modeling complex relationships within multimodal data. However, existing graph fraud detection methods typically use preprocessed node embeddings and predefined graph structures to reveal fraudsters, ignoring the rich semantic cues contained in raw textual information. Although Large Language Models (LLMs) exhibit powerful capabilities in processing textual information, performing multimodal fusion of the resulting textual embeddings with graph structures remains a significant challenge. In this paper, we propose a Multi-level LLM Enhanced graph fraud Detection framework called MLED. In MLED, we utilize LLMs to extract external knowledge from textual information to enhance graph fraud detection methods. To integrate LLMs with graph structure information and improve the ability to distinguish fraudsters, we design a multi-level LLM-enhanced framework comprising a type-level enhancer and a relation-level enhancer: the former enhances the difference between fraudsters and benign entities, while the latter enhances the importance of fraudsters across different relations. Experiments on four real-world datasets show that MLED achieves state-of-the-art performance in graph fraud detection as a generalized framework that can be applied to existing methods.
Problem

Research questions and friction points this paper is trying to address.

Enhancing fraud detection using LLMs and graph structures
Integrating textual embeddings with multimodal graph data
Improving fraudster differentiation via multi-level LLM enhancement
Innovation

Methods, ideas, or system contributions that make the work stand out.

LLMs extract external knowledge from text
Multi-level enhancer integrates LLMs with graphs
Type and relation enhancers distinguish fraudsters
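The relation-level enhancer is described as weighting a node's importance across different relations. The paper's mechanism is not detailed here, so this is only a hedged sketch of one plausible realization: attention over relation-specific embeddings scored against an LLM-derived semantic cue. Every name (`relation_level_enhance`, the `query` vector, the shapes) is a hypothetical placeholder.

```python
import numpy as np

def softmax(x):
    # numerically stable softmax over a 1-D score vector
    e = np.exp(x - x.max())
    return e / e.sum()

def relation_level_enhance(rel_embs, query):
    """Weight relation-specific embeddings of one node by semantic relevance.
    rel_embs: (R, d), one embedding per relation; query: (d,), an LLM-derived cue."""
    scores = rel_embs @ query      # relevance score of each relation
    alpha = softmax(scores)        # attention weights over the R relations
    return alpha @ rel_embs, alpha # fused node embedding and its weights

rng = np.random.default_rng(1)
R, d = 3, 8                        # illustrative: 3 relations, 8-dim embeddings
rel_embs = rng.normal(size=(R, d))
query = rng.normal(size=d)
fused, alpha = relation_level_enhance(rel_embs, query)
print(fused.shape, alpha.shape)
```

Under this reading, relations whose embeddings align with the semantic cue dominate the fused representation, amplifying the signal of suspicious relations for a given node.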