Retrieval-Augmented Generation-based Relation Extraction

📅 2024-04-20
🏛️ arXiv.org
📈 Citations: 10
Influential: 0
🤖 AI Summary
To address the susceptibility of large language models (LLMs) to training-data bias, which leads to hallucinations, and their heavy reliance on annotated data and computational resources in relation extraction (RE), this paper proposes RAG4RE, presented as the first systematic framework integrating retrieval-augmented generation (RAG) into RE. RAG4RE requires no fine-tuning and combines dense passage retrieval (DPR), semantic matching, and LLMs (e.g., Flan-T5, Llama2, Mistral) for robust relation identification in few-shot and zero-shot settings. Evaluated on standard benchmarks including TACRED and TACREV, RAG4RE achieves up to a 7.2% absolute F1 improvement over strong LLM-only baselines and conventional supervised methods. These results indicate that retrieval-guided generation mitigates hallucination, reduces dependence on labeled data, and improves generalization across diverse RE tasks.
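The retrieve-then-prompt idea summarized above can be sketched as follows. This is a minimal illustration, not the authors' code: the example store, the prompt template, and the relation labels are assumptions, and a toy bag-of-words cosine similarity stands in for the paper's dense passage retrieval / semantic matching.

```python
# Sketch of a RAG4RE-style pipeline: retrieve the most similar labeled
# sentence from a small example store, then build a relation-extraction
# prompt that includes the retrieved example. All names and data below
# are hypothetical; bag-of-words cosine is a stand-in for DPR.
from collections import Counter
import math


def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bag-of-words vectors."""
    num = sum(a[t] * b[t] for t in set(a) & set(b))
    den = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return num / den if den else 0.0


def retrieve(query: str, store: list) -> tuple:
    """Return the (sentence, relation) example most similar to the query."""
    q = Counter(query.lower().split())
    return max(store, key=lambda ex: cosine(q, Counter(ex[0].lower().split())))


def build_prompt(sentence: str, head: str, tail: str, store: list) -> str:
    """Augment a plain RE query with the retrieved labeled example."""
    ex_sent, ex_rel = retrieve(sentence, store)
    return (
        f"Example: '{ex_sent}' -> relation: {ex_rel}\n"
        f"Sentence: '{sentence}'\n"
        f"What is the relation between '{head}' and '{tail}'?"
    )


# Toy labeled store standing in for the training split used as a retrieval corpus.
store = [
    ("Alice was born in Paris", "per:city_of_birth"),
    ("Bob works for Acme Corp", "per:employee_of"),
]
prompt = build_prompt("Carol works for Initech", "Carol", "Initech", store)
print(prompt)
```

The augmented prompt (rather than the bare query alone) is what would be sent to an LLM such as Flan-T5, Llama2, or Mistral, which is how the approach sidesteps fine-tuning.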

📝 Abstract
Information Extraction (IE) is a transformative process that converts unstructured text data into a structured format by employing entity and relation extraction (RE) methodologies. The identification of the relation between a pair of entities plays a crucial role within this framework. Despite the existence of various techniques for relation extraction, their efficacy heavily relies on access to labeled data and substantial computational resources. Large Language Models (LLMs) emerge as promising solutions to these challenges; however, they may return hallucinated responses owing to their own training data. To overcome these limitations, this work proposes Retrieval-Augmented Generation-based Relation Extraction (RAG4RE), offering a pathway to enhance the performance of relation extraction tasks. We evaluate the effectiveness of the RAG4RE approach with different LLMs on established benchmarks, namely the TACRED, TACREV, Re-TACRED, and SemEval RE datasets. In particular, we leverage prominent LLMs including Flan T5, Llama2, and Mistral in our investigation. The results demonstrate that RAG4RE surpasses the performance of traditional RE approaches based solely on LLMs, particularly on the TACRED dataset and its variations. Furthermore, the approach performs remarkably well compared to previous RE methodologies on both the TACRED and TACREV datasets, underscoring its efficacy and potential for advancing RE tasks in natural language processing.
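For context on how results on benchmarks such as TACRED are typically scored, the sketch below computes micro precision/recall/F1 while excluding the negative "no_relation" label from the counts, which is the common convention for TACRED-style evaluation. The relation labels and predictions are hypothetical; this is an illustration of the metric, not the paper's evaluation code.

```python
# Micro P/R/F1 for relation extraction, TACRED-style: the negative
# label ("no_relation") is excluded when counting predicted and gold
# positives. Labels and data below are hypothetical examples.
def micro_prf(gold, pred, negative="no_relation"):
    correct = sum(1 for g, p in zip(gold, pred) if g == p and g != negative)
    pred_pos = sum(1 for p in pred if p != negative)   # predicted positives
    gold_pos = sum(1 for g in gold if g != negative)   # gold positives
    precision = correct / pred_pos if pred_pos else 0.0
    recall = correct / gold_pos if gold_pos else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1


gold = ["per:employee_of", "no_relation", "org:founded_by", "per:city_of_birth"]
pred = ["per:employee_of", "per:employee_of", "org:founded_by", "no_relation"]
p, r, f1 = micro_prf(gold, pred)
print(f"P={p:.3f} R={r:.3f} F1={f1:.3f}")  # P=0.667 R=0.667 F1=0.667
```

An "absolute F1 improvement" of the kind reported in the summary above is simply the difference between two such F1 scores.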
Problem

Research questions and friction points this paper is trying to address.

Improving relation extraction accuracy by reducing LLM hallucinations
Overcoming dependency on labeled data and computational resources
Enhancing entity relationship identification using retrieval-augmented generation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Retrieval-Augmented Generation enhances relation extraction
Uses multiple LLMs including Flan T5 and Llama2
Outperforms traditional approaches on TACRED datasets
Sefika Efeoglu
Electrical Engineering and Computer Science Department, Technische Universitaet Berlin, Berlin, Germany
Adrian Paschke
Professor, Computer Science, Freie Universitaet Berlin
Corporate Semantic Web, Machine Learning, Artificial Intelligence, Data Analytics, Semantic Technologies