π€ AI Summary
In federated learning, spatiotemporally distributed data induce covariate shift, causing local empirical distributions across clients to deviate from the global underlying distribution and thereby degrading model generalization. To address this, we propose FIRE (Fisher Information Regularized Estimation), the first method to incorporate the Fisher information matrix into federated cross-validation. FIRE approximates the Fisher information distance between each clientβs local data and the global distribution, quantifying and correcting covariate shift. This distance is embedded as a scalable distribution alignment penalty in the loss function, enabling robust federated validation and training. Experiments demonstrate that FIRE achieves up to 5.1% higher accuracy than importance-weighted baselines on shifted validation sets and outperforms standard federated learning methods by 5.3%, significantly enhancing cross-distribution generalization.
π Abstract
When training data are fragmented across batches or federated-learned across different geographic locations, trained models manifest performance degradation. That degradation partly owes to covariate shift induced by data having been fragmented across time and space and producing dissimilar empirical training distributions. Each fragment's distribution is slightly different to a hypothetical unfragmented training distribution of covariates, and to the single validation distribution. To address this problem, we propose Fisher Information for Robust fEderated validation ( extbf{FIRE}). This method accumulates fragmentation-induced covariate shift divergences from the global training distribution via an approximate Fisher information. That term, which we prove to be a more computationally-tractable estimate, is then used as a per-fragment loss penalty, enabling scalable distribution alignment. FIRE outperforms importance weighting benchmarks by $5.1%$ at maximum and federated learning (FL) benchmarks by up to $5.3%$ on shifted validation sets.