Understanding Structured Financial Data with LLMs: A Case Study on Fraud Detection

📅 2025-12-15
📈 Citations: 0
Influential: 0
🤖 AI Summary
Financial fraud detection suffers from the poor interpretability of tabular models and labor-intensive feature engineering, while existing large language models (LLMs) perform poorly when applied directly because they struggle with high-dimensional tabular data, extreme class imbalance, and a lack of domain-specific contextual grounding. To address these challenges, we propose FinFRE-RAG, a two-stage framework: first, importance-weighted feature reduction compresses high-dimensional tabular features and serializes them into natural-language sequences ("table-to-text"); second, label-aware instance retrieval augments in-context learning (RAG) to improve both inference accuracy and attribution-based interpretability. The method is compatible with open-source LLMs of multiple scales (e.g., Llama, Phi, Qwen). Evaluated on four public benchmarks, FinFRE-RAG achieves significantly higher F1 and Matthews Correlation Coefficient (MCC) than direct prompting, matches or approaches state-of-the-art tabular models in certain settings, and generates high-quality, traceable attribution explanations, demonstrating the practical utility of LLMs as analytical assistants.

📝 Abstract
Detecting fraud in financial transactions typically relies on tabular models that demand heavy feature engineering to handle high-dimensional data and offer limited interpretability, making it difficult for humans to understand predictions. Large Language Models (LLMs), in contrast, can produce human-readable explanations and facilitate feature analysis, potentially reducing the manual workload of fraud analysts and informing system refinements. However, they perform poorly when applied directly to tabular fraud detection due to the difficulty of reasoning over many features, the extreme class imbalance, and the absence of contextual information. To bridge this gap, we introduce FinFRE-RAG, a two-stage approach that applies importance-guided feature reduction to serialize a compact subset of numeric/categorical attributes into natural language and performs retrieval-augmented in-context learning over label-aware, instance-level exemplars. Across four public fraud datasets and three families of open-weight LLMs, FinFRE-RAG substantially improves F1/MCC over direct prompting and is competitive with strong tabular baselines in several settings. Although these LLMs still lag behind specialized classifiers, they narrow the performance gap and provide interpretable rationales, highlighting their value as assistive tools in fraud analysis.
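The abstract's first stage, importance-guided feature reduction followed by natural-language serialization, can be sketched as below. This is an illustrative reconstruction, not the authors' code: the feature names, example values, and importance scores are invented, and the paper does not specify the exact serialization template.

```python
def serialize_transaction(row, importances, top_k=5):
    """Keep only the top_k most important features and render them as text.

    `importances` could come from, e.g., a tree ensemble's feature
    importances; here the scores are hypothetical.
    """
    ranked = sorted(row.keys(), key=lambda f: importances.get(f, 0.0), reverse=True)
    parts = [f"{feat} is {row[feat]}" for feat in ranked[:top_k]]
    return "; ".join(parts) + "."

# A hypothetical transaction record (names and values are assumptions).
row = {"amount": 912.50, "merchant_category": "electronics",
       "hour_of_day": 3, "card_present": False, "tx_count_24h": 14,
       "country": "US"}
importances = {"amount": 0.31, "tx_count_24h": 0.27, "hour_of_day": 0.18,
               "card_present": 0.12, "merchant_category": 0.07, "country": 0.05}

print(serialize_transaction(row, importances))
# → amount is 912.5; tx_count_24h is 14; hour_of_day is 3; card_present is False; merchant_category is electronics.
```

The resulting sentence, rather than the raw high-dimensional row, is what the LLM sees, which is how the approach sidesteps the model's difficulty reasoning over many tabular features at once.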
Problem

Research questions and friction points this paper is trying to address.

Improves fraud detection using LLMs for interpretability
Addresses class imbalance and feature complexity in tabular data
Enhances performance via feature reduction and retrieval-augmented learning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Two-stage approach with feature reduction and RAG
Serializes compact feature subset into natural language
Uses retrieval-augmented in-context learning with exemplars
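The label-aware retrieval step above can be sketched as follows. This is a minimal illustration under assumptions: the paper's actual retriever, distance metric, and prompt layout are not specified here, so plain Euclidean distance over numeric features stands in for whatever similarity the authors use. Retrieving exemplars separately per label guarantees the prompt contains fraud examples even under extreme class imbalance.

```python
import math

def nearest_per_label(query, pool, k_per_label=2):
    """Return the k nearest exemplars from each label group (0=legit, 1=fraud)."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    out = {}
    for label in (0, 1):
        group = [ex for ex in pool if ex["label"] == label]
        group.sort(key=lambda ex: dist(query, ex["features"]))
        out[label] = group[:k_per_label]
    return out

# A tiny hypothetical exemplar pool: [amount, tx_count_24h] per record.
pool = [
    {"features": [100.0, 1], "label": 0},
    {"features": [105.0, 2], "label": 0},
    {"features": [900.0, 14], "label": 1},
    {"features": [950.0, 12], "label": 1},
    {"features": [20.0, 1], "label": 0},
]
exemplars = nearest_per_label([910.0, 13], pool, k_per_label=1)
# The selected exemplars from each class would then be serialized (as in the
# feature-reduction step) and prepended to the prompt as in-context examples.
```

Balancing exemplars by label is one plausible way to realize the "label-aware" retrieval the summary describes; alternatives such as stratified sampling over retrieved neighbors would serve the same purpose.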