AI Summary
Existing marked temporal point process (MTPP) models commonly employ channel-mixing strategies that embed heterogeneous event types into a shared latent space, conflating type-specific dynamics and leading to performance degradation and overfitting. To address this, we propose a channel-independent architecture coupled with a type-aware inverted self-attention mechanism, integrated within an encoder-decoder framework powered by an ordinary differential equation (ODE) backbone. Our approach explicitly decouples the dynamic evolution of each event type while simultaneously modeling cross-type dependencies. This design mitigates feature entanglement, enhances model interpretability, and improves generalization. Extensive experiments on multiple real-world and synthetic MTPP benchmarks demonstrate that our method significantly outperforms state-of-the-art approaches in both predictive accuracy and robustness.
Abstract
Marked Temporal Point Processes (MTPPs) provide a principled framework for modeling asynchronous event sequences by conditioning on the history of past events. However, most existing MTPP models rely on channel-mixing strategies that encode information from different event types into a single, fixed-size latent representation. This entanglement can obscure type-specific dynamics, leading to performance degradation and increased risk of overfitting. In this work, we introduce ITPP, a novel channel-independent architecture for MTPP modeling that decouples event type information using an encoder-decoder framework with an ODE-based backbone. Central to ITPP is a type-aware inverted self-attention mechanism, designed to explicitly model inter-channel correlations among heterogeneous event types. This architecture enhances effectiveness and robustness while reducing overfitting. Comprehensive experiments on multiple real-world and synthetic datasets demonstrate that ITPP consistently outperforms state-of-the-art MTPP models in both predictive accuracy and generalization.
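To make the "inverted" attention idea concrete, here is a minimal NumPy sketch of attention applied across event-type channels rather than across time steps, so the tokens are the per-type embeddings and the attention matrix captures cross-type correlations. All names (`inverted_type_attention`, the projection matrices, the shapes) are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def inverted_type_attention(H, Wq, Wk, Wv):
    """Self-attention over event-type channels ("inverted": the K event
    types are the tokens, not the time steps), so each type's channel
    is updated from its correlation with every other type's channel.

    H: (K, d) matrix of channel-independent per-type embeddings.
    Wq, Wk, Wv: (d, d) projection matrices (hypothetical parameters).
    """
    Q, K_, V = H @ Wq, H @ Wk, H @ Wv
    d = Q.shape[-1]
    A = softmax(Q @ K_.T / np.sqrt(d), axis=-1)  # (K, K) cross-type weights
    return A @ V, A

rng = np.random.default_rng(0)
num_types, d = 4, 8
H = rng.standard_normal((num_types, d))  # one embedding per event type
Wq, Wk, Wv = (rng.standard_normal((d, d)) for _ in range(3))
Z, A = inverted_type_attention(H, Wq, Wk, Wv)  # Z: updated per-type states
```

In a full model, each row of `H` would come from a per-type (channel-independent) encoder over that type's event history, and `Z` would feed the ODE-based decoder; the attention step is the only place the channels interact.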