🤖 AI Summary
Conventional Bayesian optimization methods infer parameters from summary statistics (e.g., means or quantiles) of stochastic simulation outputs, discarding trajectory-level structural information. Method: We propose a trajectory-level Bayesian optimization framework that jointly models input parameters and random seeds via a Gaussian process surrogate; introduces a trajectory-level likelihood function; leverages common random numbers (CRN) to ensure output comparability; and employs an adaptive grid Thompson sampling strategy that combines likelihood-based filtering with Metropolis–Hastings densification for dynamic input-space refinement. Contribution/Results: Evaluated on a compartmental model and an agent-based epidemiological model, our approach significantly improves the efficiency of identifying observation-consistent trajectories. Compared to standard parameter-level inference, it achieves higher sampling efficiency and faster convergence while preserving temporal dynamics and stochastic structure.
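The core idea of joint modeling over parameters and seeds can be illustrated with a minimal sketch. Everything below is illustrative, not the paper's implementation: a toy one-parameter simulator stands in for the stochastic model, a plain NumPy RBF-kernel GP regression stands in for the surrogate, and the length-scales are arbitrary assumptions. The key structural point from the summary is preserved: the same fixed seed set (CRN) is reused across all parameter values, and each (theta, seed) pair is a distinct surrogate input, so the GP predicts individual trajectories rather than seed-averaged summaries.

```python
import numpy as np

def simulate(theta, seed):
    """Toy stochastic simulator (hypothetical). With common random
    numbers, the same seed produces the same noise draw across
    different theta values, so outputs are directly comparable."""
    rng = np.random.default_rng(seed)
    return 2.0 * theta + rng.normal(scale=0.1)

def rbf(A, B, ls):
    """Anisotropic RBF kernel over joint (theta, seed) inputs."""
    d2 = ((A[:, None, :] - B[None, :, :]) / ls) ** 2
    return np.exp(-0.5 * d2.sum(-1))

# Design: every theta is paired with the SAME fixed CRN seed set,
# and (theta, seed) together form the surrogate input.
thetas = np.linspace(0.0, 1.0, 5)
seeds = np.arange(3)
X = np.array([[t, s] for t in thetas for s in seeds], float)
y = np.array([simulate(t, int(s)) for t, s in X])

ls = np.array([0.5, 0.75])              # assumed kernel length-scales
K = rbf(X, X, ls) + 1e-6 * np.eye(len(X))
alpha = np.linalg.solve(K, y)

def predict(theta, seed):
    """GP posterior mean at one (theta, seed) pair: a trajectory-level
    prediction, not a summary statistic averaged over seeds."""
    x = np.array([[theta, float(seed)]])
    return rbf(x, X, ls) @ alpha

mu = predict(0.35, 1)                   # predict for a seen seed, new theta
```

Because the seed is an explicit input dimension, the surrogate can answer "what would this trajectory look like at a new theta under seed 1," which is exactly the trajectory-level query that seed-averaged surrogates cannot express.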
📝 Abstract
Bayesian optimization (BO) is a powerful framework for estimating parameters of computationally expensive simulation models, particularly in settings where the likelihood is intractable and evaluations are costly. In stochastic models, every simulation run combines a specific parameter set with an implicit or explicit random seed; each such combination generates an individual realization, or trajectory, sampled from an underlying random process. Existing BO approaches typically rely on summary statistics over the realizations, such as means, medians, or quantiles, potentially limiting their effectiveness when trajectory-level information is desired. We propose a trajectory-oriented Bayesian optimization method built on a Gaussian process (GP) surrogate that takes both input parameters and random seeds as inputs, enabling direct inference at the trajectory level. Using a common random number (CRN) approach, we define a surrogate-based likelihood over trajectories and introduce an adaptive Thompson sampling algorithm that refines a fixed-size input grid through likelihood-based filtering and Metropolis–Hastings-based densification. This approach concentrates computation on statistically promising regions of the input space while balancing exploration and exploitation. We apply the method to stochastic epidemic models, a simple compartmental model and a more computationally demanding agent-based model, demonstrating improved sampling efficiency and faster identification of data-consistent trajectories relative to parameter-only inference.
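The adaptive grid refinement loop described in the abstract can be sketched as follows. This is a simplified stand-in, not the paper's algorithm: a deterministic Gaussian-mismatch log-likelihood replaces the surrogate-based trajectory likelihood (and hence the posterior draw of the Thompson-sampling step), and the grid size, filtering fraction, and proposal scale are arbitrary choices. The skeleton it shows is the one the abstract names: keep the grid at a fixed size, filter out low-likelihood candidates, and densify around survivors with Metropolis–Hastings random-walk proposals.

```python
import numpy as np

rng = np.random.default_rng(0)
y_obs = 0.6                              # toy "observed" target

def log_lik(theta):
    """Stand-in for the surrogate-based trajectory likelihood:
    a Gaussian mismatch between the model output 2*theta and y_obs."""
    return -0.5 * ((2.0 * theta - y_obs) / 0.1) ** 2

grid = rng.uniform(0.0, 1.0, size=20)    # fixed-size input grid
for _ in range(10):
    ll = log_lik(grid)
    # Likelihood-based filtering: keep the better half of the grid.
    keep = grid[np.argsort(ll)[len(grid) // 2:]]
    # Metropolis-Hastings densification: random-walk proposals around
    # the kept points, accepted with the usual MH log-ratio.
    props = keep + rng.normal(scale=0.05, size=keep.shape)
    accept = np.log(rng.uniform(size=keep.shape)) < log_lik(props) - log_lik(keep)
    new = np.where(accept, props, keep)
    grid = np.concatenate([keep, new])   # grid size stays fixed at 20

best = grid[np.argmax(log_lik(grid))]    # converges toward theta = 0.3
```

Because filtering discards as many points as densification adds, the grid never grows, yet over iterations its mass migrates into the high-likelihood region, which is how computation is concentrated on statistically promising parts of the input space.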