🤖 AI Summary
This paper addresses robust online sampling under dynamically evolving target distributions: given an initial point set, new points are incrementally added to uniformly approximate a time-varying probability measure, while preserving the validity of historical samples amid distributional shifts. The authors propose a density-difference-driven online optimization framework, supported by a continuous mean-field model for theoretical analysis, which guarantees uniform approximation of the current target distribution at every time step. Generating each new point incurs only $O(n)$ computational cost per iteration, achieving both provable convergence and high efficiency. Experiments show that the method attains state-of-the-art sampling quality on both static and dynamic distributions, significantly outperforming existing adaptive sampling strategies on discrepancy metrics and empirical coverage.
📝 Abstract
We suppose we are given a list of points $x_1, \dots, x_n \in \mathbb{R}$, a target probability measure $\mu$, and are asked to add additional points $x_{n+1}, \dots, x_{n+m}$ so that $x_1, \dots, x_{n+m}$ is as close as possible to the distribution of $\mu$; additionally, we want this to be true uniformly for all $m$. We propose a simple method that achieves this goal. It selects new points in regions where the existing set is lacking points and avoids regions that are already overly crowded. If we replace $\mu$ by another measure $\mu_2$ in the middle of the computation, the method dynamically adjusts and allows us to keep the original sampling points. $x_{n+1}$ can be computed in $\mathcal{O}(n)$ steps and we obtain state-of-the-art results. The method appears to be an interesting dynamical system in its own right; we analyze a continuous mean-field version that reflects much of the same behavior.
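The density-difference idea described above (place the next point where the existing set undershoots the target, avoid crowded regions, at $\mathcal{O}(n)$ cost per candidate) can be illustrated with a minimal sketch. This is not the authors' exact algorithm: the Gaussian kernel density estimate, the `bandwidth` parameter, and the finite candidate grid are all illustrative assumptions.

```python
import numpy as np

def add_point(points, target_pdf, candidates, bandwidth=0.3):
    """Illustrative sketch: pick the candidate where the empirical
    density of the current points most undershoots the target density.

    Evaluating the kernel density estimate at one candidate costs O(n),
    matching the per-point cost quoted in the abstract.
    """
    pts = np.asarray(points, dtype=float)
    n = len(pts)
    # Gaussian kernel density estimate of the current point set,
    # evaluated at every candidate location (assumed kernel choice).
    diff = candidates[:, None] - pts[None, :]
    kde = np.exp(-0.5 * (diff / bandwidth) ** 2).sum(axis=1)
    kde /= n * bandwidth * np.sqrt(2.0 * np.pi)
    # Signed density difference: most negative where points are lacking,
    # positive where the set is already overly crowded.
    gap = kde - target_pdf(candidates)
    return candidates[np.argmin(gap)]
```

For example, with a standard normal target and initial points clustered near $x \approx 1$, the sketch places the next point away from that cluster, toward the under-sampled bulk of the target. Swapping `target_pdf` mid-run mimics the replacement of $\mu$ by $\mu_2$: old points are kept, and subsequent points fill in the new measure.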