🤖 AI Summary
To address the limited modeling flexibility of baseline intensities and triggering kernels in discrete-time Hawkes processes, this paper proposes the first fully nonparametric framework for this setting. It introduces a collapsed Gaussian process prior to jointly model time-varying baselines and event-triggering kernels, enabling efficient (near-linear-time) maximum a posteriori inference via a latent-variable representation and a closed-form projection. The method balances expressive power, interpretability, and computational feasibility. Experiments on synthetic data and real-world datasets, including U.S. terrorist incidents and cryptosporidiosis case counts, demonstrate substantial improvements in predictive log-likelihood and accurate characterization of burstiness, delayed effects, and seasonal background dynamics. The core contribution is end-to-end nonparametric learning of both baseline and triggering functions in discrete-time Hawkes processes, overcoming longstanding limitations imposed by fixed parametric kernel families and rigid baseline specifications.
📝 Abstract
Hawkes process models are used in settings where past events increase the likelihood of future events. Many applications record events as counts on a regular grid, yet discrete-time Hawkes models remain comparatively underused and are often constrained by fixed-form baselines and excitation kernels. In particular, there is a lack of flexible, nonparametric treatments of both the baseline and the excitation in discrete time. To this end, we propose the Gaussian Process Discrete Hawkes Process (GP-DHP), a nonparametric framework that places Gaussian process priors on both the baseline and the excitation and performs inference through a collapsed latent representation. This yields smooth, data-adaptive structure without prespecifying trends, periodicities, or decay shapes, and enables maximum a posteriori (MAP) estimation with near-linear, O(T log T) complexity. A closed-form projection recovers interpretable baseline and excitation functions from the optimized latent trajectory. In simulations, GP-DHP recovers diverse excitation shapes and evolving baselines. In case studies on U.S. terrorism incidents and weekly cryptosporidiosis counts, it improves test predictive log-likelihood over standard parametric discrete Hawkes baselines while capturing bursts, delays, and seasonal background variation. The results indicate that flexible discrete-time self-excitation can be achieved without sacrificing scalability or interpretability.
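To make the discrete-time setup concrete, the following is a minimal sketch of the standard discrete Hawkes likelihood the abstract builds on: the conditional intensity in bin t is a baseline plus past counts weighted by an excitation kernel, and bin counts are Poisson given that intensity. This assumes the common formulation λ_t = μ_t + Σ_{d≥1} φ_d · y_{t−d}; the function names are illustrative, and GP-DHP's actual machinery (GP priors, collapsed latent representation, MAP inference) is not shown here.

```python
import math

def discrete_hawkes_intensity(counts, mu, phi):
    """Conditional intensity lambda_t = mu[t] + sum_{d=1..L} phi[d-1] * counts[t-d].

    counts: observed event counts per bin (y_1..y_T)
    mu:     baseline intensity per bin (time-varying in GP-DHP)
    phi:    excitation kernel over lags 1..L (nonparametric in GP-DHP)
    """
    T = len(counts)
    lam = []
    for t in range(T):
        # Sum excitation contributions from the last min(L, t) bins.
        excite = sum(phi[d - 1] * counts[t - d]
                     for d in range(1, min(len(phi), t) + 1))
        lam.append(mu[t] + excite)
    return lam

def poisson_log_likelihood(counts, lam):
    # Log-likelihood of Poisson counts given per-bin intensities;
    # lgamma(y + 1) is the log-factorial normalizer.
    return sum(y * math.log(l) - l - math.lgamma(y + 1)
               for y, l in zip(counts, lam))
```

In GP-DHP both mu and phi would be latent functions under Gaussian process priors rather than fixed vectors, and this log-likelihood (plus the GP prior terms) is what MAP estimation would optimize.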