🤖 AI Summary
Current large language models struggle to effectively model event evolution in multi-turn dialogues, resulting in incomplete context tracking and poor response coherence. To address this, we propose an event-centric dynamic modeling framework that introduces a novel dynamic event graph structure, explicitly distinguishing and incrementally updating core events versus supporting events to enable context-aware incremental reasoning. Our method comprises four components: (1) dynamic event graph construction, (2) fine-grained event role classification, (3) an event-graph-guided attention mechanism, and (4) zero-shot event-augmented inference. Crucially, the framework requires neither full dialogue history retrieval nor model fine-tuning. Evaluated on two benchmark datasets, it significantly improves response coherence and event relevance, achieving state-of-the-art performance.
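The dynamic event graph at the heart of the framework can be pictured as a store of typed event nodes that is updated incrementally each turn and queried for the most relevant events at generation time. A minimal sketch, assuming a simple core/supporting role split and a recency window for supporting events (class and method names are illustrative, not taken from the paper):

```python
from dataclasses import dataclass

@dataclass
class Event:
    eid: int
    description: str
    role: str   # "core" or "supporting" (hypothetical two-way role labels)
    turn: int   # dialogue turn at which the event was extracted

class DynamicEventGraph:
    """Illustrative dynamic event graph: nodes are events, edges link
    supporting events to the core events they contextualize."""

    def __init__(self):
        self.events: dict[int, Event] = {}
        self.edges: set[tuple[int, int]] = set()  # (supporting, core) links

    def add_event(self, event, linked_to=()):
        """Incrementally insert a new event and connect it to existing ones."""
        self.events[event.eid] = event
        for target in linked_to:
            if target in self.events:
                self.edges.add((event.eid, target))

    def relevant_events(self, current_turn, window=2):
        """Return all core events plus supporting events from recent turns,
        so generation can attend to these instead of the full history."""
        return [
            e for e in self.events.values()
            if e.role == "core" or current_turn - e.turn <= window
        ]
```

For example, a core event ("user plans a trip") stays retrievable across the whole dialogue, while a stale supporting event ("prefers trains", mentioned many turns ago) ages out of the relevance set; the actual selection mechanism in the paper is an attention mechanism guided by this graph rather than a hard recency cutoff.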
📝 Abstract
Large language models (LLMs) have driven remarkable progress in dialogue systems. However, many approaches still overlook the fundamental role of events throughout multi-turn interactions, leading to **incomplete context tracking**. Without tracking these events, dialogue systems often lose coherence and miss subtle shifts in user intent, producing disjointed responses. To bridge this gap, we present **EventWeave**, an event-centric framework that identifies and updates both core and supporting events as the conversation unfolds. Specifically, we organize these events into a dynamic event graph, which represents the interplay between **core events** that shape the primary idea and **supporting events** that provide critical context throughout the dialogue. By leveraging this dynamic graph, EventWeave helps models focus on the most relevant events when generating responses, avoiding repeated traversals of the entire dialogue history. Experimental results on two benchmark datasets show that EventWeave improves response quality and event relevance without fine-tuning.