A Nonparametric Discrete Hawkes Model with a Collapsed Gaussian-Process Prior

📅 2025-09-26
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the limited modeling flexibility of baseline intensities and triggering kernels in discrete-time Hawkes processes, this paper proposes the first fully nonparametric framework. It introduces a collapsed Gaussian process prior to jointly model time-varying baselines and event-triggering kernels, enabling efficient (nearly linear-time) maximum a posteriori inference via latent variable representation and closed-form projection. The method balances expressive power, interpretability, and computational feasibility. Experiments on synthetic data and real-world datasets—including U.S. terrorist incidents and cryptosporidiosis case counts—demonstrate substantial improvements in predictive log-likelihood and accurate characterization of burstiness, delayed effects, and seasonal background dynamics. The core contribution is the first end-to-end nonparametric learning of both baseline and triggering functions in discrete-time Hawkes processes, overcoming longstanding limitations imposed by fixed parametric kernel families and rigid baseline specifications.

📝 Abstract
Hawkes process models are used in settings where past events increase the likelihood of future events occurring. Many applications record events as counts on a regular grid, yet discrete-time Hawkes models remain comparatively underused and are often constrained by fixed-form baselines and excitation kernels. In particular, there is a lack of flexible, nonparametric treatments of both the baseline and the excitation in discrete time. To address this, we propose the Gaussian Process Discrete Hawkes Process (GP-DHP), a nonparametric framework that places Gaussian process priors on both the baseline and the excitation and performs inference through a collapsed latent representation. This yields smooth, data-adaptive structure without prespecifying trends, periodicities, or decay shapes, and enables maximum a posteriori (MAP) estimation with near-linear-time, O(T log T), complexity. A closed-form projection recovers interpretable baseline and excitation functions from the optimized latent trajectory. In simulations, GP-DHP recovers diverse excitation shapes and evolving baselines. In case studies on U.S. terrorism incidents and weekly cryptosporidiosis counts, it improves test predictive log-likelihood over standard parametric discrete Hawkes baselines while capturing bursts, delays, and seasonal background variation. The results indicate that flexible discrete-time self-excitation can be achieved without sacrificing scalability or interpretability.
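The discrete-time Hawkes structure the abstract describes (a time-varying baseline plus lagged self-excitation, with Poisson-distributed counts) can be sketched in a few lines. This is a minimal illustration, not the paper's GP-DHP implementation: the function names, the finite kernel length D, and the toy inputs are assumptions.

```python
import math
import numpy as np

def discrete_hawkes_intensity(counts, baseline, excitation):
    """Conditional intensity lambda_t = mu_t + sum_{d=1}^{D} phi_d * y_{t-d}.

    counts:     observed counts y_1..y_T on a regular grid
    baseline:   time-varying baseline mu_1..mu_T
    excitation: excitation kernel phi_1..phi_D (weight per lag, assumed finite)
    """
    T, D = len(counts), len(excitation)
    lam = np.array(baseline, dtype=float).copy()
    for t in range(T):
        # Add the excitation contributed by each past count within the kernel window.
        for d in range(1, min(D, t) + 1):
            lam[t] += excitation[d - 1] * counts[t - d]
    return lam

def poisson_loglik(counts, lam):
    # Log-likelihood under y_t | past ~ Poisson(lambda_t), the standard
    # observation model for discrete-time Hawkes count data.
    return sum(y * math.log(l) - l - math.lgamma(y + 1)
               for y, l in zip(counts, lam))
```

In GP-DHP both `baseline` and `excitation` carry Gaussian process priors rather than fixed parametric forms; the sketch above only shows the likelihood side that any discrete Hawkes variant shares.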
Problem

Research questions and friction points this paper is trying to address.

Models discrete-time event counts with flexible baseline and excitation functions
Enables data-adaptive structure without prespecified trends or decay shapes
Captures bursts, delays, and seasonal variation in event data
Innovation

Methods, ideas, or system contributions that make the work stand out.

Gaussian process priors on baseline and excitation
Collapsed latent representation for MAP inference
Closed-form projection recovers interpretable functions
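The bullets above can be made concrete with a generic GP-MAP objective: a latent log-intensity with a Gaussian process prior, optimized against the Poisson likelihood. This is a plain O(T^3) baseline sketch under assumed choices (RBF kernel, log link, illustrative hyperparameters); the paper's collapsed latent representation and closed-form projection, which give the near-linear-time inference, are not reproduced here.

```python
import numpy as np

def rbf_kernel(ts, lengthscale=5.0, variance=1.0, jitter=1e-6):
    # Squared-exponential covariance over grid points; hyperparameters are
    # illustrative, not values from the paper.
    d = ts[:, None] - ts[None, :]
    K = variance * np.exp(-0.5 * (d / lengthscale) ** 2)
    return K + jitter * np.eye(len(ts))

def neg_log_posterior(f, counts, K_inv):
    # MAP objective for a latent log-intensity f with lambda_t = exp(f_t):
    # minimize -(Poisson log-likelihood + GP log-prior), constants dropped.
    lam = np.exp(f)
    loglik = np.sum(counts * f - lam)
    logprior = -0.5 * f @ K_inv @ f
    return -(loglik + logprior)
```

Minimizing this objective over `f` (e.g. with a quasi-Newton method) yields the MAP latent trajectory; in GP-DHP an analogous objective is optimized in a collapsed parameterization, after which baseline and excitation are recovered by projection.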