TPP-LLM: Modeling Temporal Point Processes by Efficiently Fine-Tuning Large Language Models

📅 2024-10-02
🏛️ arXiv.org
📈 Citations: 1
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the challenge of jointly modeling semantic and temporal dynamics in event sequence prediction. We propose the first end-to-end framework that directly feeds raw event text into a large language model (LLM) while integrating it with a temporal point process (TPP). Methodologically, we incorporate learnable time embeddings and parameter-efficient fine-tuning (PEFT) to preserve the LLM’s strong semantic comprehension while accurately capturing temporal dependencies. Our key innovation lies in bypassing hand-crafted features or separate text encoders—thereby avoiding information loss—and enabling joint optimization of semantic and temporal representations. Evaluated on multiple real-world event sequence datasets, our approach substantially outperforms state-of-the-art methods, achieving significant gains in both event type and occurrence time prediction accuracy, while reducing training overhead by over 30%.

Technology Category

Application Category

📝 Abstract
Temporal point processes (TPPs) are widely used to model the timing and occurrence of events in domains such as social networks, transportation systems, and e-commerce. In this paper, we introduce TPP-LLM, a novel framework that integrates large language models (LLMs) with TPPs to capture both the semantic and temporal aspects of event sequences. Unlike traditional methods that rely on categorical event type representations, TPP-LLM directly utilizes the textual descriptions of event types, enabling the model to capture rich semantic information embedded in the text. While LLMs excel at understanding event semantics, they are less adept at capturing temporal patterns. To address this, TPP-LLM incorporates temporal embeddings and employs parameter-efficient fine-tuning (PEFT) methods to effectively learn temporal dynamics without extensive retraining. This approach improves both predictive accuracy and computational efficiency. Experimental results across diverse real-world datasets demonstrate that TPP-LLM outperforms state-of-the-art baselines in sequence modeling and event prediction, highlighting the benefits of combining LLMs with TPPs.
Problem

Research questions and friction points this paper is trying to address.

Modeling event timing and occurrence in dynamic systems
Integrating semantic and temporal aspects of event sequences
Improving predictive accuracy with efficient LLM fine-tuning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Integrates LLMs with temporal point processes
Uses textual descriptions for semantic information
Employs parameter-efficient fine-tuning for temporal dynamics
🔎 Similar Papers
No similar papers found.
Z
Zefang Liu
Georgia Institute of Technology
Yinzhu Quan
Yinzhu Quan
Georgia Institute of Technology