AI Summary
This work systematically evaluates large language models (LLMs), specifically GPT-3.5 and LLaMA-2, on discourse-level event relation extraction (ERE), targeting four complex relation types: coreference, temporal, causal, and subevent. We identify critical systematic limitations: event hallucination in long documents, violations of transitivity constraints, failure to model long-range dependencies, and breakdowns in reasoning over contexts with dense event mentions. Through zero-shot and few-shot prompting, supervised fine-tuning (SFT), and combined quantitative and qualitative analysis, we demonstrate that LLMs consistently underperform lightweight supervised models; SFT yields only marginal gains and generalizes poorly. This study is the first to rigorously expose fundamental architectural and representational bottlenecks of LLMs in discourse-level ERE. To support reproducibility and further research, we publicly release all code and evaluation data.
Abstract
Large Language Models (LLMs) have demonstrated proficiency in a wide array of natural language processing tasks. However, their effectiveness on discourse-level event relation extraction (ERE) tasks remains unexplored. In this paper, we assess the effectiveness of LLMs in addressing discourse-level ERE tasks characterized by lengthy documents and intricate relations encompassing coreference, temporal, causal, and subevent types. Evaluation is conducted using a commercial model, GPT-3.5, and an open-source model, LLaMA-2. Our study reveals a notable underperformance of LLMs compared to the baseline established through supervised learning. Although Supervised Fine-Tuning (SFT) can improve LLMs' performance, it does not scale well compared to the smaller supervised baseline model. Our quantitative and qualitative analysis shows that LLMs have several weaknesses when applied to extracting event relations, including a tendency to fabricate event mentions, and failures to capture transitivity rules among relations, detect long-distance relations, or comprehend contexts with dense event mentions. Code available at: https://github.com/WeiKangda/LLM-ERE.git.
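One of the failure modes noted above is that LLM predictions violate transitivity rules among relations (e.g., if event A is BEFORE B and B is BEFORE C, a prediction that C is BEFORE A is inconsistent). The minimal sketch below, not taken from the paper's released code, illustrates how such violations can be detected in a set of predicted temporal relations; the `transitivity_violations` helper, the event names, and the label scheme are all hypothetical.

```python
# Hypothetical sketch: flag transitivity violations in predicted temporal relations.
# Assumes predictions are a dict mapping ordered event pairs to a label such as "BEFORE".
from itertools import permutations

def transitivity_violations(relations):
    """Return (a, c) pairs where a BEFORE c is implied transitively
    but the predictions contradict it."""
    before = {pair for pair, label in relations.items() if label == "BEFORE"}
    events = sorted({e for pair in relations for e in pair})
    violations = []
    for a, b, c in permutations(events, 3):
        if (a, b) in before and (b, c) in before:
            # Transitivity implies a BEFORE c; flag any conflicting prediction.
            if relations.get((c, a)) == "BEFORE" or \
               relations.get((a, c)) not in (None, "BEFORE"):
                violations.append((a, c))
    return violations

# A cyclic set of predictions: every implied pair is contradicted.
preds = {
    ("e1", "e2"): "BEFORE",
    ("e2", "e3"): "BEFORE",
    ("e3", "e1"): "BEFORE",  # inconsistent: closes a temporal cycle
}
print(transitivity_violations(preds))  # → [('e1', 'e3'), ('e2', 'e1'), ('e3', 'e2')]
```

A supervised baseline can enforce such constraints at decoding time, whereas a prompted LLM produces each pairwise label independently, which is one plausible source of the inconsistencies the analysis reports.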