🤖 AI Summary
Online learning systems face prohibitively high hyperparameter search costs due to persistent data distribution shifts. This paper proposes a two-stage efficient hyperparameter search paradigm tailored for non-stationary sequential data: Stage I employs lightweight data summarization and sequence forecasting to rapidly identify high-potential configurations; Stage II performs full training only on the shortlisted candidates. Departing from conventional performance-maximization search strategies, the approach prioritizes early, accurate pruning, significantly reducing redundant computation. Evaluated on the Criteo 1TB dataset, it achieves up to a 10× reduction in search cost, and its efficacy and generalizability are further validated in large-scale industrial advertising systems. The core innovation lies in reframing hyperparameter optimization from static performance tuning to dynamic, adaptivity-driven candidate screening, thereby overcoming the fundamental limitations of traditional methods in time-varying environments.
📝 Abstract
Online learning is the cornerstone of applications like recommendation and advertising systems, where models continuously adapt to shifting data distributions. Model training for such systems is remarkably expensive, a cost that multiplies during hyperparameter search. We introduce a two-stage paradigm to reduce this cost: (1) efficiently identifying the most promising configurations, and then (2) training only these selected candidates to their full potential. Our core insight is that focusing on accurate identification in the first stage, rather than achieving peak performance, allows for aggressive cost-saving measures. We develop novel data reduction and prediction strategies that specifically overcome the challenges of sequential, non-stationary data not addressed by conventional hyperparameter optimization. We validate our framework's effectiveness through a dual evaluation: first on the Criteo 1TB dataset, the largest suitable public benchmark, and second on an industrial advertising system operating at a scale two orders of magnitude larger. Our methods reduce the total hyperparameter search cost by up to 10× on the public benchmark and deliver significant, validated efficiency gains in the industrial setting.
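The two-stage search described above can be sketched in miniature. This is a minimal illustrative sketch, not the paper's implementation: the names `cheap_score` and `full_train`, the quadratic toy objective, and the noise model are all assumptions standing in for the paper's data-summarization/forecasting proxy and full training, respectively.

```python
import random

random.seed(0)  # for reproducibility of the noisy Stage I scores

def cheap_score(config):
    # Stage I proxy (hypothetical): score a config on a lightweight data
    # summary / sequence forecast instead of full training. Modeled here
    # as the true objective plus small noise.
    return -(config["lr"] - 0.01) ** 2 + random.gauss(0, 1e-5)

def full_train(config):
    # Stage II (hypothetical): expensive full training, run only for
    # shortlisted configs. Modeled as a noiseless toy objective with
    # optimum at lr = 0.01.
    return -(config["lr"] - 0.01) ** 2

def two_stage_search(configs, shortlist_size=3):
    # Stage I: rank all candidates by the cheap proxy and keep the top few.
    shortlist = sorted(configs, key=cheap_score, reverse=True)[:shortlist_size]
    # Stage II: fully train only the shortlist; return the best config.
    return max(shortlist, key=full_train)

candidates = [{"lr": lr} for lr in (0.001, 0.005, 0.01, 0.05, 0.1)]
best = two_stage_search(candidates)
print(best)
```

Because Stage I only needs to rank candidates accurately enough for pruning, its per-config cost can be made far smaller than full training, which is where the overall search savings come from.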