Learning Wisdom from Errors: Promoting LLM's Continual Relation Learning through Exploiting Error Cases

📅 2025-08-16
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
To address catastrophic forgetting in continual relation extraction (CRE), this paper proposes Error-Case–guided Instructional Contrastive Tuning (EC-ICT). EC-ICT is the first method to systematically leverage erroneous predictions on historical tasks, constructing an instruction-guided dual-objective fine-tuning framework: (i) instruction tuning to enhance relational discrimination, and (ii) contrastive learning to pull representations of correct samples closer while pushing those of erroneous ones farther apart. Integrated with memory replay and with separate training on erroneous and correct samples, EC-ICT dynamically rectifies the model's cognitive biases. Evaluated on TACRED and FewRel, EC-ICT substantially outperforms existing state-of-the-art methods. The results demonstrate that error cases play a pivotal role in mitigating knowledge forgetting and bridging representational gaps between old and new tasks. This work establishes a novel paradigm for continual relation learning with large language models.

πŸ“ Abstract
Continual Relation Extraction (CRE) aims to continually learn new emerging relations while avoiding catastrophic forgetting. Existing CRE methods mainly use memory replay and contrastive learning to mitigate catastrophic forgetting. However, these methods do not attach importance to the error cases that can reveal the model's cognitive biases more effectively. To address this issue, we propose an instruction-based continual contrastive tuning approach for Large Language Models (LLMs) in CRE. Different from existing CRE methods that typically handle the training and memory data in a unified manner, this approach splits the training and memory data of each task into two parts respectively based on the correctness of the initial responses and treats them differently through dual-task fine-tuning. In addition, leveraging the advantages of LLM's instruction-following ability, we propose a novel instruction-based contrastive tuning strategy for LLM to continuously correct current cognitive biases with the guidance of previous data in an instruction-tuning manner, which mitigates the gap between old and new relations in a more suitable way for LLMs. We experimentally evaluate our model on TACRED and FewRel, and the results show that our model achieves new state-of-the-art CRE performance with significant improvements, demonstrating the importance of specializing in exploiting error cases.
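As a rough illustration of the core idea in the abstract — splitting each task's training and memory data by the correctness of the model's initial responses, then applying a contrastive objective to separate the two groups — here is a minimal Python sketch. All names (`split_by_correctness`, `info_nce_loss`, the toy predictor) and the simplified InfoNCE loss are illustrative assumptions, not the paper's actual implementation.

```python
import math

def split_by_correctness(samples, predict):
    """Partition (text, gold_relation) pairs by whether the model's
    initial prediction matches the gold label."""
    correct, erroneous = [], []
    for text, gold in samples:
        bucket = correct if predict(text) == gold else erroneous
        bucket.append((text, gold))
    return correct, erroneous

def info_nce_loss(anchor, positive, negatives, temperature=0.1):
    """Toy InfoNCE-style contrastive loss on unit-normalized embeddings:
    pulls the positive pair together, pushes negatives apart."""
    def dot(u, v):
        return sum(a * b for a, b in zip(u, v))
    sims = [dot(anchor, positive)] + [dot(anchor, n) for n in negatives]
    exps = [math.exp(s / temperature) for s in sims]
    return -math.log(exps[0] / sum(exps))

# Hypothetical predictor standing in for the LLM's initial response.
predict = lambda text: "founded_by" if "founded" in text else "no_relation"

samples = [
    ("Jobs founded Apple.", "founded_by"),       # initial answer correct
    ("Gates was born in Seattle.", "born_in"),   # initial answer wrong
]
correct, erroneous = split_by_correctness(samples, predict)
# The two buckets would then be fine-tuned under different instructions,
# with the contrastive term keeping their representations apart.
loss_aligned   = info_nce_loss((1.0, 0.0), (1.0, 0.0), [(0.0, 1.0)])
loss_misplaced = info_nce_loss((1.0, 0.0), (0.0, 1.0), [(1.0, 0.0)])
```

A well-aligned anchor/positive pair yields a much smaller loss than a misplaced one, which is what drives erroneous and correct representations apart during tuning.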
Problem

Research questions and friction points this paper is trying to address.

Addresses catastrophic forgetting in continual relation extraction
Exploits error cases to reveal model cognitive biases
Proposes instruction-based contrastive tuning for LLMs
Innovation

Methods, ideas, or system contributions that make the work stand out.

Dual-task fine-tuning based on error correctness
Instruction-based contrastive tuning strategy
Exploiting error cases to correct cognitive biases
Shaozhe Yin
Beijing University of Posts and Telecommunications, Beijing, China
Jinyu Guo
University of Electronic Science and Technology of China
Kai Shuang
Beijing University of Posts and Telecommunications, Beijing, China
Xia Liu
China National Institute of Standardization, Beijing, China
Ruize Ou
Beijing University of Posts and Telecommunications, Beijing, China