🤖 AI Summary
This paper studies online regression on data streams where the learner can exploit predictions about future examples, focusing on transductive online learning—in which the sequence of instances is known in advance—and its robust generalization to noisy predictions. The authors propose a learning-augmented online algorithmic framework and, for the first time, fully characterize the minimax expected regret of transductive online regression in terms of the fat-shattering dimension, revealing a fundamental separation from the adversarial setting. Their algorithm adapts to prediction quality: it approaches the transductive optimum when predictions are accurate, yet retains the worst-case regret bound when predictions degrade. As a consequence, certain function classes that are unlearnable in standard online regression become learnable in predictable environments.
📝 Abstract
Motivated by the predictable nature of real-life data streams, we study online regression when the learner has access to predictions about future examples. In the extreme case, called transductive online learning, the sequence of examples is revealed to the learner before the game begins. For this setting, we fully characterize the minimax expected regret in terms of the fat-shattering dimension, establishing a separation between transductive online regression and (adversarial) online regression. We then generalize this setting by allowing for noisy or *imperfect* predictions about future examples. Using our results for the transductive setting, we develop an online learner whose minimax expected regret matches the worst-case regret bound, improves smoothly with prediction quality, and significantly outperforms the worst-case regret when predictions of future examples are precise, achieving performance similar to the transductive online learner. This enables learnability for previously unlearnable classes under predictable examples, aligning with the broader learning-augmented paradigm.
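The paper's algorithm is not reproduced here, but the robustness–consistency behavior the abstract describes (matching the worst-case bound while improving with prediction quality) can be illustrated with the generic learning-augmented pattern of running two online learners—one that trusts the example predictions and one with a worst-case guarantee—and aggregating them with an exponentially weighted forecaster. The sketch below is a minimal illustration of that pattern under assumed bounded absolute loss on [0, 1], not the authors' construction; `combine_learners` and its round format are hypothetical names for this example.

```python
import math

def combine_learners(rounds, eta=0.5):
    """Exponentially weighted combination of two experts: a 'prediction-based'
    learner (trusts forecasts of future examples) and a 'worst-case' learner.
    The combiner's cumulative loss tracks the better expert up to an additive
    term, which is the standard learning-augmented guarantee (a generic
    sketch, not the paper's exact algorithm).

    Each element of `rounds` is ((expert0_pred, expert1_pred), true_label),
    with predictions and labels assumed to lie in [0, 1].
    """
    weights = [1.0, 1.0]          # one weight per expert
    total_loss = 0.0              # combiner's cumulative absolute loss
    expert_losses = [0.0, 0.0]    # each expert's cumulative absolute loss
    for expert_preds, y in rounds:
        w_sum = weights[0] + weights[1]
        # Weighted-average prediction (convex combination stays in [0, 1]).
        y_hat = (weights[0] * expert_preds[0] + weights[1] * expert_preds[1]) / w_sum
        total_loss += abs(y_hat - y)
        for i in (0, 1):
            loss_i = abs(expert_preds[i] - y)
            expert_losses[i] += loss_i
            # Multiplicative (Hedge-style) weight update: poor experts decay.
            weights[i] *= math.exp(-eta * loss_i)
    return total_loss, expert_losses
```

When the prediction-based expert is accurate, its weight dominates and the combiner's loss stays near the transductive-style optimum; when the predictions are bad, the weight shifts to the worst-case expert, so the combiner never does much worse than the worst-case guarantee.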