SCIR: A Self-Correcting Iterative Refinement Framework for Enhanced Information Extraction Based on Schema

📅 2025-12-13
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
To address the high fine-tuning cost and poor alignment with large language model (LLM) preferences in LLM-driven information extraction, this paper proposes SCIR—a novel framework featuring a dual-path self-correcting iterative refinement paradigm. It introduces a Dual-Path Self-Correcting module and a feedback-driven optimization mechanism; constructs MBSC, a bilingual (Chinese–English) self-correction dataset comprising over 100K instances, distilled via GPT-4’s discriminative capability to enhance preference alignment; and integrates multi-task joint fine-tuning with collaborative training of detection models. Evaluated on named entity recognition, relation extraction, and event extraction, SCIR achieves an average 5.27% improvement in span-based micro-F1, reduces training cost by 87%, and enables plug-and-play deployment for efficient, task-agnostic enhancement.
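The extract-detect-refine cycle described in the summary can be sketched roughly as follows. This is an illustrative sketch only: the function names (`extract`, `detect_errors`, `scir_loop`) and the stub extractor/detector are assumptions, not the paper's actual SCIR interfaces.

```python
# Hypothetical sketch of a self-correcting iterative refinement loop.
# A real system would back extract() with an LLM-based IE model and
# detect_errors() with a trained result-detection model (the role the
# MBSC-distilled detectors play in the paper).

def extract(text, feedback=None):
    """Stand-in extractor: returns (span, label) pairs, optionally
    conditioned on corrections fed back from the detector."""
    spans = [("Huazhong University", "ORG"), ("GPT-4", "ORG")]
    if feedback:
        # Apply the detector's suggested label corrections.
        spans = [(s, feedback.get(s, label)) for s, label in spans]
    return spans

def detect_errors(spans):
    """Stand-in detection model: flags spans whose label it rejects
    and proposes a replacement label for each."""
    preferred = {"GPT-4": "MODEL"}  # toy preference for illustration
    return {s: preferred[s] for s, label in spans
            if s in preferred and preferred[s] != label}

def scir_loop(text, max_iters=3):
    """Iterate: extract, detect errors, feed corrections back, stop
    once the detector accepts the result or the budget is spent."""
    feedback = None
    for _ in range(max_iters):
        spans = extract(text, feedback)
        errors = detect_errors(spans)
        if not errors:       # detector accepts the extraction
            return spans
        feedback = errors    # feedback-driven refinement step
    return spans
```

Because the detector only vets outputs rather than retraining the extractor, such a loop can wrap an existing IE system, which is the plug-and-play property the summary highlights.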

📝 Abstract
Although Large Language Model (LLM)-powered information extraction (IE) systems have shown impressive capabilities, current fine-tuning paradigms face two major limitations: high training costs and difficulty aligning with LLM preferences. To address these issues, we propose a novel universal IE paradigm, the Self-Correcting Iterative Refinement (SCIR) framework, along with a Multi-task Bilingual (Chinese-English) Self-Correcting (MBSC) dataset containing over 100,000 entries. The SCIR framework achieves plug-and-play compatibility with existing LLMs and IE systems through its Dual-Path Self-Correcting module and feedback-driven optimization, thereby significantly reducing training costs. Concurrently, the MBSC dataset tackles the challenge of preference alignment by indirectly distilling GPT-4's capabilities into IE result detection models. Experimental results demonstrate that SCIR outperforms state-of-the-art IE methods across three key tasks: named entity recognition, relation extraction, and event extraction, achieving a 5.27% average improvement in span-based Micro-F1 while reducing training costs by 87% compared to baseline approaches. These advancements not only enhance the flexibility and accuracy of IE systems but also pave the way for lightweight and efficient IE paradigms.
Problem

Research questions and friction points this paper is trying to address.

Reduces training costs for LLM-based information extraction systems
Aligns extraction models with LLM preferences using self-correction
Improves accuracy across entity, relation, and event extraction tasks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Self-Correcting Iterative Refinement framework for plug-and-play compatibility
Dual-Path Self-Correcting module with feedback-driven optimization
Multi-task Bilingual dataset distilling GPT-4 capabilities indirectly
Yushen Fang
School of Computer Science and Technology, Huazhong University of Science and Technology
Jianjun Li
Professor
Artificial intelligence, Computer vision, Video coding, Microelectronics, 3D
Mingqian Ding
School of Computer Science and Technology, Huazhong University of Science and Technology
Chang Liu
School of Computer Science and Technology, Huazhong University of Science and Technology
Xinchi Zou
School of Computer Science and Technology, Huazhong University of Science and Technology
Wenqi Yang
School of Computer Science and Technology, Huazhong University of Science and Technology