In Defense of Defensive Forecasting

📅 2025-06-13
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This paper addresses the limitations of traditional forecasting methods—namely, their reliance on probabilistic modeling assumptions about hypothetical future distributions—by proposing a hypothesis-free, non-asymptotic defensive prediction framework. Instead of assuming a data-generating mechanism, the method formulates forecasting as an adversarial sequential game and achieves robustness by correcting past prediction errors online. Its key contributions are threefold: (1) a unified defensive prediction paradigm grounded in Vovk’s game-theoretic framework and online convex optimization; (2) a hyperparameter-free recursive calibration algorithm with theoretical guarantees for online conformal prediction; and (3) attainment of the optimal $O(\sqrt{T})$ regret bound, together with strong and exact calibration guarantees under arbitrary sequences. Experiments demonstrate its simplicity and near-optimal performance across online learning, expert aggregation, and conformal prediction tasks.
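The "online correction of past prediction errors" idea behind the recursive calibration contribution can be illustrated with a generic online conformal sketch. This is a minimal illustration of the standard adaptive-level correction, not the paper's own algorithm, and all names and constants below are illustrative: after each round, the target miscoverage level is nudged by the observed coverage error, so long-run coverage is controlled without any distributional assumptions.

```python
import random

def adaptive_conformal(scores, alpha=0.1, gamma=0.05):
    """Online conformal prediction via recursive level correction:
    after a miss the working level shrinks (wider future sets), after
    a cover it grows, so average coverage tracks 1 - alpha for any
    sequence of conformity scores."""
    q_level = alpha          # working miscoverage level, corrected online
    covered = []
    history = []
    for s in scores:
        if history:
            # threshold = empirical (1 - q_level) quantile of past scores
            k = max(0, min(len(history) - 1,
                           int((1 - q_level) * len(history))))
            thresh = sorted(history)[k]
        else:
            thresh = float("inf")   # cover everything until we have data
        cover = s <= thresh
        covered.append(cover)
        # defensive correction of the past error: err = 1 on a miss
        err = 0.0 if cover else 1.0
        q_level = min(max(q_level + gamma * (alpha - err), 0.001), 0.999)
        history.append(s)
    return sum(covered) / len(covered)
```

On an i.i.d. stream of uniform scores, the realized coverage settles near the nominal 90% without any tuning of a model for the data.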

📝 Abstract
This tutorial provides a survey of algorithms for Defensive Forecasting, where predictions are derived not by prognostication but by correcting past mistakes. Pioneered by Vovk, Defensive Forecasting frames the goal of prediction as a sequential game, and derives predictions to minimize metrics no matter what outcomes occur. We present an elementary introduction to this general theory and derive simple, near-optimal algorithms for online learning, calibration, prediction with expert advice, and online conformal prediction.
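For the online learning task in the abstract, the $O(\sqrt{T})$ regret rate it targets is achieved by standard projected online gradient descent. The sketch below is that textbook baseline under illustrative assumptions (scalar domain, hypothetical function names), not the tutorial's defensive derivation:

```python
import math

def ogd(gradient, x0, lo, hi, T, D=1.0, G=1.0):
    """Projected online gradient descent on the interval [lo, hi]:
    step size D / (G * sqrt(t)) yields the classical O(sqrt(T)) regret
    bound against any fixed comparator, for any outcome sequence."""
    x = x0
    plays = []
    for t in range(1, T + 1):
        plays.append(x)
        g = gradient(x, t)                        # adversary reveals gradient
        x = x - (D / (G * math.sqrt(t))) * g      # correct the past mistake
        x = min(max(x, lo), hi)                   # project back onto domain
    return plays
```

Against alternating squared-loss targets 1, 0, 1, 0, …, the iterates oscillate with shrinking amplitude around the best fixed action 0.5, matching the "minimize metrics no matter what outcomes occur" framing.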
Problem

Research questions and friction points this paper is trying to address.

Traditional forecasting rests on probabilistic modeling assumptions about how future data will be generated
Robust prediction requires guarantees that hold for arbitrary, even adversarial, outcome sequences
Online learning, calibration, expert advice, and conformal prediction lack a simple unified derivation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Derives predictions by correcting past mistakes rather than by modeling future outcomes
Frames forecasting as a sequential game whose metrics are minimized regardless of the outcome sequence
Yields simple, near-optimal algorithms for online learning, calibration, expert advice, and conformal prediction
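For the expert-advice setting among the tasks above, a standard point of comparison is the exponentially weighted average forecaster. This is the classical Hedge-style baseline, not necessarily the paper's defensive derivation, and the function names are illustrative:

```python
import math

def exp_weights(expert_preds, outcomes, eta=0.5):
    """Exponentially weighted average forecaster: experts with smaller
    cumulative squared loss receive exponentially larger weight, so the
    aggregate forecast tracks the best expert in hindsight."""
    n = len(expert_preds[0])
    losses = [0.0] * n       # cumulative squared loss per expert
    agg_loss = 0.0
    for preds, y in zip(expert_preds, outcomes):
        w = [math.exp(-eta * l) for l in losses]
        z = sum(w)
        p = sum(wi * pi for wi, pi in zip(w, preds)) / z  # weighted forecast
        agg_loss += (p - y) ** 2
        for i, pi in enumerate(preds):
            losses[i] += (pi - y) ** 2
    return agg_loss, min(losses)
```

With one expert that is always right and one that is always wrong, the aggregate's excess loss over the best expert stays bounded by a small constant, independent of the horizon.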