Towards Explainable Temporal Reasoning in Large Language Models: A Structure-Aware Generative Framework

📅 2025-05-21
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the limited interpretability of large language models (LLMs) in temporal reasoning tasks. Methodologically, it (1) introduces soft graph tokens that explicitly encode temporal knowledge graphs into the LLM's textual embedding space, enabling structure-guided reasoning for the first time, and (2) designs a structure-text prefix adapter to jointly integrate graph-structured and natural language representations. In terms of contributions and results, it establishes the first explainable evaluation benchmark covering multi-granularity temporal relations, and the proposed framework, GETER, achieves state-of-the-art performance on this benchmark while significantly improving both explanation plausibility and factual consistency and demonstrating strong cross-task generalization capability.
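The summary above mentions a structure-text prefix adapter but does not spell out its architecture. Below is a minimal PyTorch sketch, assuming the adapter is a small MLP that projects a graph-level feature into k soft graph tokens in the LLM's text embedding space; the class name StructureTextPrefixAdapter and the parameters graph_dim, llm_dim, and num_soft_tokens are illustrative assumptions, not the paper's actual implementation.

```python
# Assumed sketch of a structure-text prefix adapter (not the paper's exact code).
import torch
import torch.nn as nn

class StructureTextPrefixAdapter(nn.Module):
    """Project a graph-level embedding into k soft graph tokens that live in the
    LLM's text embedding space, so they can be prepended to prompt tokens."""

    def __init__(self, graph_dim: int, llm_dim: int, num_soft_tokens: int = 8):
        super().__init__()
        self.num_soft_tokens = num_soft_tokens
        self.llm_dim = llm_dim
        self.proj = nn.Sequential(
            nn.Linear(graph_dim, llm_dim),
            nn.GELU(),
            nn.Linear(llm_dim, num_soft_tokens * llm_dim),
        )

    def forward(self, graph_emb: torch.Tensor) -> torch.Tensor:
        # graph_emb: (batch, graph_dim) -> soft graph tokens: (batch, k, llm_dim)
        batch = graph_emb.size(0)
        return self.proj(graph_emb).view(batch, self.num_soft_tokens, self.llm_dim)
```

In use, one would instantiate the adapter with the temporal encoder's output dimension and the backbone LLM's hidden size (e.g., StructureTextPrefixAdapter(graph_dim=256, llm_dim=4096)) and feed its output as a prefix to the prompt embeddings.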

📝 Abstract
While large language models (LLMs) show great potential in temporal reasoning, most existing work focuses heavily on enhancing performance, often neglecting the explainable reasoning processes underlying the results. To address this gap, we introduce a comprehensive benchmark covering a wide range of temporal granularities, designed to systematically evaluate LLMs' capabilities in explainable temporal reasoning. Furthermore, our findings reveal that LLMs struggle to deliver convincing explanations when relying solely on textual information. To address this challenge, we propose GETER, a novel structure-aware generative framework that integrates Graph structures with text for Explainable TEmporal Reasoning. Specifically, we first leverage temporal knowledge graphs to develop a temporal encoder that captures structural information for the query. Subsequently, we introduce a structure-text prefix adapter to map graph structure features into the text embedding space. Finally, LLMs generate explanation text by seamlessly integrating the soft graph token with instruction-tuning prompt tokens. Experimental results indicate that GETER achieves state-of-the-art performance while also demonstrating its effectiveness as well as strong generalization capabilities. Our dataset and code are available at https://github.com/carryTatum/GETER.
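As a hedged illustration of the final step described in the abstract (prepending the soft graph tokens to the instruction-tuning prompt tokens before generation), the sketch below uses the Hugging Face transformers API with inputs_embeds; the backbone model name, the prompt, and the zero-valued soft_graph_tokens stand-in are placeholders, and GETER's actual pipeline may differ.

```python
# Hypothetical glue code: prepend soft graph tokens to prompt embeddings.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "meta-llama/Llama-2-7b-hf"  # placeholder backbone (assumption)
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

prompt = "Explain whether event A happens before event B."  # illustrative query
input_ids = tokenizer(prompt, return_tensors="pt").input_ids

# Embed the prompt tokens with the LLM's own embedding table.
prompt_embeds = model.get_input_embeddings()(input_ids)        # (1, T, llm_dim)

# In GETER these would come from the prefix adapter; a zero tensor stands in here.
soft_graph_tokens = torch.zeros(1, 8, prompt_embeds.size(-1))  # (1, k, llm_dim)

# Prepend the soft graph tokens and generate the explanation text.
inputs_embeds = torch.cat([soft_graph_tokens, prompt_embeds], dim=1)
attention_mask = torch.ones(inputs_embeds.shape[:2], dtype=torch.long)
output_ids = model.generate(inputs_embeds=inputs_embeds,
                            attention_mask=attention_mask,
                            max_new_tokens=128)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```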
Problem

Research questions and friction points this paper is trying to address.

Enhancing explainable temporal reasoning in LLMs
Addressing LLMs' struggle with textual-only explanations
Integrating graph structures for improved temporal reasoning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Integrates Graph structures with text for reasoning
Uses temporal knowledge graphs for structural encoding (see the encoder sketch after this list)
Maps graph features into text embedding space
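The bullets above leave the temporal encoder unspecified. The following minimal sketch assumes entities, relations, and timestamps each receive learned embeddings and that the facts around the query are pooled into a single structural feature vector; the class name TemporalGraphEncoder, the quadruple input format, and the mean-pooling choice are assumptions rather than the paper's design.

```python
# Assumed sketch of a temporal knowledge-graph encoder for a query's neighborhood.
import torch
import torch.nn as nn

class TemporalGraphEncoder(nn.Module):
    def __init__(self, n_entities: int, n_relations: int, n_timestamps: int, dim: int = 128):
        super().__init__()
        self.ent = nn.Embedding(n_entities, dim)
        self.rel = nn.Embedding(n_relations, dim)
        self.time = nn.Embedding(n_timestamps, dim)
        self.msg = nn.Linear(3 * dim, dim)

    def forward(self, quads: torch.Tensor) -> torch.Tensor:
        # quads: (num_facts, 4) tensor of (head, relation, tail, timestamp) ids
        h, r, t, ts = quads[:, 0], quads[:, 1], quads[:, 2], quads[:, 3]
        messages = self.msg(torch.cat(
            [self.ent(h) + self.ent(t), self.rel(r), self.time(ts)], dim=-1))
        # Mean-pool the per-fact messages into one graph-level feature vector.
        return torch.relu(messages).mean(dim=0)
```

Its pooled output would then be passed through the prefix adapter sketched earlier to obtain the soft graph tokens.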
Zihao Jiang
Wuhan University
Knowledge Graph, Natural Language Processing, Large Language Model
Ben Liu
School of Computer Science, Wuhan University, China
Miao Peng
The Hong Kong University of Science and Technology (Guangzhou)
Knowledge Graph, Natural Language Processing
Wenjie Xu
PhD Student, Wuhan University
Knowledge Graph, NLP
Yao Xiao
School of Computer Science, Wuhan University, China
Zhenyan Shan
School of Computer Science, Wuhan University, China
Min Peng
School of Computer Science, Wuhan University, China