Rates of Convergence of Generalised Variational Inference Posteriors under Prior Misspecification

📅 2025-10-03
🤖 AI Summary
This paper studies the convergence rate and robustness of generalised variational inference (GVI) under prior misspecification, confronting the challenges posed by non-KL divergences, restricted spaces of probability measures, and intractable posteriors. The authors establish sufficient conditions for the existence and uniqueness of the GVI posterior on Polish spaces, prove its almost-sure concentration on a neighbourhood of the loss minimisers, and derive an upper bound on the convergence rate that holds regardless of the prior. By introducing bounded-divergence constraints and measure-space truncation, and combining tools from functional analysis and probability measure theory, the paper provides the first unified convergence guarantees for both centralised and federated GVI under severe prior misspecification, strengthening robustness to model misspecification and laying a rigorous theoretical foundation for distributed learning and other complex inference settings.

📝 Abstract
We prove rates of convergence and robustness to prior misspecification within a Generalised Variational Inference (GVI) framework with bounded divergences. This addresses a significant open challenge for GVI and Federated GVI methods that employ a divergence other than the Kullback–Leibler, operate within a subset of the possible probability measures, and result in intractable posteriors. Our theoretical contributions cover severe prior misspecification while relying on our ability to restrict the space of possible GVI posterior measures and to infer properties from this space. In particular, we establish sufficient conditions for the existence and uniqueness of GVI posteriors on arbitrary Polish spaces, prove that the GVI posterior measure concentrates on a neighbourhood of the loss minimisers, and extend this to rates of convergence regardless of the prior measure.
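For context, the generic GVI objective (in the standard form due to Knoblauch, Jewson and Damoulas; the notation below is that generic form, not reproduced from this paper) selects the posterior as the minimiser of an expected loss plus a divergence penalty to the prior, over a restricted set of measures:

```latex
q^{*}_{n} \;=\; \operatorname*{arg\,min}_{q \in \Pi}\;
  \mathbb{E}_{\theta \sim q}\!\Big[\sum_{i=1}^{n} \ell(\theta, x_i)\Big]
  \;+\; D\big(q \,\|\, \pi\big),
```

where $\Pi$ is a subset of the probability measures on the parameter space, $\ell$ is a loss, $\pi$ is the (possibly misspecified) prior, and $D$ is a divergence, here assumed bounded rather than the Kullback–Leibler.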
Problem

Research questions and friction points this paper is trying to address.

Establishes convergence rates for variational inference with bounded divergences
Addresses robustness to prior misspecification in generalized variational inference
Proves posterior concentration on neighborhoods of loss minimizers
Innovation

Methods, ideas, or system contributions that make the work stand out.

Proves convergence rates for bounded divergence GVI
Establishes GVI posterior existence on Polish spaces
Shows posterior concentration near loss minimizers
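As a toy illustration of the concentration phenomenon described above, the sketch below uses the classical KL-divergence special case of GVI, whose minimiser has the closed-form Gibbs posterior $q(\theta) \propto \pi(\theta)\exp(-n\,\ell(\theta))$. The paper's results concern more general bounded divergences, where no such closed form is available; all parameter values here are hypothetical.

```python
import math

def gibbs_posterior(thetas, prior, loss, n):
    """Gibbs posterior over a finite grid: q(theta) ∝ prior(theta) * exp(-n * loss(theta))."""
    logw = [math.log(prior[i]) - n * loss(t) for i, t in enumerate(thetas)]
    m = max(logw)                              # log-sum-exp stabilisation
    w = [math.exp(l - m) for l in logw]
    z = sum(w)
    return [x / z for x in w]

thetas = [-1.0, 0.0, 0.5, 1.0]
loss = lambda t: (t - 0.5) ** 2                # loss minimised at theta = 0.5
prior = [0.7, 0.1, 0.1, 0.1]                   # misspecified: most mass far from the minimiser

# Posterior mass shifts onto theta = 0.5 as the sample size n grows,
# regardless of where the prior places its mass.
for n in (1, 10, 100):
    q = gibbs_posterior(thetas, prior, loss, n)
    print(n, [round(x, 3) for x in q])
```

Even with 70% of the prior mass on the worst candidate, the posterior concentrates on the loss minimiser as $n$ grows, mirroring the prior-agnostic concentration the paper proves in the far harder bounded-divergence setting.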
Terje Mildner
Department of Statistics, University of Warwick
Paris Giampouras
Assistant Professor @ University of Warwick
generative models, representation learning, adversarial robustness, continual learning
Theodoros Damoulas
Department of Statistics, University of Warwick; Department of Computer Science, University of Warwick