🤖 AI Summary
This survey addresses relation extraction (RE) research in the Transformer era (2019–2024). To overcome the limitations of manual literature reviews, we propose the first automated framework for systematic literature acquisition and annotation tailored to RE. Our methodology synthesizes 34 survey papers, 64 benchmark datasets, and 104 models, enabling multidimensional analysis across methodological evolution, benchmark resources, and semantic web techniques. Unlike prior work, our approach constructs a structured, holistic knowledge map covering models, data, and evaluation, thereby clarifying the developmental trajectories of dominant paradigms, including prompt learning, instruction tuning, and knowledge-enhanced modeling. We identify four critical open challenges: robustness to annotation noise, few-shot generalization, cross-domain transferability, and model interpretability. The resulting synthesis provides researchers with a reusable analytical framework and an authoritative reference system for advancing RE research.
📝 Abstract
This article presents a systematic review of relation extraction (RE) research since the advent of Transformer-based models. Using an automated framework to collect and annotate publications, we analyze 34 surveys, 64 datasets, and 104 models published between 2019 and 2024. The review highlights methodological advances, benchmark resources, and the integration of semantic web technologies. By consolidating results across these dimensions, the study identifies current trends, limitations, and open challenges, offering researchers and practitioners a comprehensive reference for understanding the evolution and future directions of RE.