🤖 AI Summary
To address the failure of online adaptation under sequential covariate shift (SCS)—where input distributions continuously evolve while conditional distributions remain invariant—this paper proposes FADE, a Fisher-information-geometry-based framework. FADE introduces an unsupervised drift signal grounded in information geometry, jointly leveraging the Cramér–Rao bound and KL divergence for label-free drift detection. It introduces a Fisher-based dynamic adjustment mechanism that enables real-time parameter updates without task boundaries or replay buffers. Furthermore, it incorporates Fisher regularization and a time-aware Fisher–KL fusion strategy to support decentralized adaptation in federated learning settings. Evaluated on seven cross-modal benchmarks, FADE significantly outperforms state-of-the-art methods including TENT and DIW, achieving up to a 19% accuracy gain under severe shifts. Theoretical analysis establishes bounded regret and parameter consistency.
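The summary does not give FADE's exact drift formula, but the idea of a label-free signal fusing KL divergence with temporal Fisher dynamics can be illustrated with a minimal sketch. Here the KL term is computed between diagonal Gaussians fitted to feature batches, the Fisher term is a diagonal empirical Fisher estimate (mean squared per-sample gradients), and `alpha` is a hypothetical fusion weight; all function names and the specific fusion rule are assumptions for illustration, not the paper's definition.

```python
import numpy as np

def empirical_fisher_diag(grads):
    """Diagonal empirical Fisher: mean of squared per-sample gradients."""
    g = np.asarray(grads)
    return (g ** 2).mean(axis=0)

def kl_gaussian_diag(mu_p, var_p, mu_q, var_q):
    """KL divergence between diagonal Gaussians fitted to two feature batches."""
    return 0.5 * np.sum(
        np.log(var_q / var_p) + (var_p + (mu_p - mu_q) ** 2) / var_q - 1.0
    )

def drift_signal(feats_ref, feats_new, grads_ref, grads_new, alpha=0.5):
    """Label-free drift score: KL between feature statistics, fused with the
    relative change in the diagonal Fisher (a Cramér–Rao-style sensitivity).
    Hypothetical fusion rule; FADE's actual formula is not given here."""
    eps = 1e-8
    kl = kl_gaussian_diag(
        feats_ref.mean(axis=0), feats_ref.var(axis=0) + eps,
        feats_new.mean(axis=0), feats_new.var(axis=0) + eps,
    )
    f_ref = empirical_fisher_diag(grads_ref) + eps
    f_new = empirical_fisher_diag(grads_new) + eps
    fisher_shift = np.abs(f_new - f_ref).sum() / f_ref.sum()
    return alpha * kl + (1.0 - alpha) * fisher_shift
```

Because both ingredients are computable from unlabeled inputs and model gradients alone, a score like this can trigger adaptation online without target labels or task boundaries, matching the setting described above.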
📝 Abstract
Modern machine learning systems operating in dynamic environments often face *sequential covariate shift* (SCS), where input distributions evolve over time while the conditional distribution remains stable. We introduce FADE (Fisher-based Adaptation to Dynamic Environments), a lightweight and theoretically grounded framework for robust learning under SCS. FADE employs a shift-aware regularization mechanism anchored in Fisher information geometry, guiding adaptation by modulating parameter updates based on sensitivity and stability. To detect significant distribution changes, we propose a Cramér–Rao-informed shift signal that integrates KL divergence with temporal Fisher dynamics. Unlike prior methods requiring task boundaries, target supervision, or experience replay, FADE operates online with fixed memory and no access to target labels. Evaluated on seven benchmarks spanning vision, language, and tabular data, FADE achieves up to 19% higher accuracy under severe shifts, outperforming methods such as TENT and DIW. FADE also generalizes naturally to federated learning by treating heterogeneous clients as temporally fragmented environments, enabling scalable and stable adaptation in decentralized settings. Theoretical analysis guarantees bounded regret and parameter consistency, while empirical results demonstrate FADE's robustness across modalities and shift intensities.
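The abstract describes modulating parameter updates by Fisher-based sensitivity and stability. One common way to realize that idea (the well-known EWC-style quadratic penalty, used here purely as an illustrative stand-in, since FADE's exact update is not specified in the abstract) is to shrink the step size and add a pull toward a recent anchor in directions where the diagonal Fisher is large. The function name `fade_style_update`, the anchor, and the hyperparameters `base_lr` and `lam` are all assumptions of this sketch.

```python
import numpy as np

def fade_style_update(theta, grad, fisher_diag, anchor, base_lr=0.1, lam=1.0):
    """One online update step, sketched under EWC-style assumptions:
    - per-parameter step size shrinks where the Fisher is high
      (sensitive/stable directions get small updates),
    - a Fisher-weighted quadratic penalty pulls parameters toward a
      recent anchor, resisting drift along important directions."""
    lr = base_lr / (1.0 + fisher_diag)               # sensitivity-modulated plasticity
    reg_grad = lam * fisher_diag * (theta - anchor)  # Fisher regularization gradient
    return theta - lr * (grad + reg_grad)
```

Because the Fisher diagonal can be estimated from squared gradients on unlabeled inputs, an update of this shape is compatible with the fixed-memory, label-free online setting the abstract describes.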