Generalization Guarantees for Representation Learning via Data-Dependent Gaussian Mixture Priors

πŸ“… 2025-02-21
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ€– AI Summary
This paper addresses the scarcity of tight theoretical generalization guarantees in representation learning by proposing a framework built on data-dependent Gaussian mixture priors. Methodologically, it combines the Minimum Description Length (MDL) principle with variational inference, constructing a symmetric prior over the training- and test-time latent variables and deriving in-expectation and tail bounds on the generalization error in terms of relative entropy. Theoretically, it yields two key insights: (i) the prior-learning procedure naturally induces a weighted attention mechanism, and (ii) the bound explicitly reflects the structure and simplicity of the encoder. Empirically, the approach outperforms VIB and CDVIB across multiple benchmarks, yielding tighter bounds and consistent improvements in generalization, thereby supporting both the theoretical analysis and the practical efficacy.
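
For orientation, information-theoretic bounds of this kind typically take a square-root relative-entropy form. The template below is an illustrative shape only, assuming a bounded loss and sample size $n$; it is not the paper's exact statement:

$$
\overline{\mathrm{gen}}(n) \;\lesssim\; \sqrt{\frac{\mathbb{E}_S\!\left[ D_{\mathrm{KL}}\!\left( P_{Z \mid S} \,\|\, Q \right) \right]}{n}}
$$

where $P_{Z \mid S}$ is the distribution of the latent representations extracted from dataset $S$ and $Q$ is the prior, here a data-dependent symmetric Gaussian mixture; the relative-entropy term is the MDL of the latents that the summary refers to.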

πŸ“ Abstract
We establish in-expectation and tail bounds on the generalization error of representation learning type algorithms. The bounds are in terms of the relative entropy between the distribution of the representations extracted from the training and "test" datasets and a data-dependent symmetric prior, i.e., the Minimum Description Length (MDL) of the latent variables for the training and test datasets. Our bounds are shown to reflect the "structure" and "simplicity" of the encoder and significantly improve upon the few existing ones for the studied model. We then use our in-expectation bound to devise a suitable data-dependent regularizer; and we investigate thoroughly the important question of the selection of the prior. We propose a systematic approach to simultaneously learning a data-dependent Gaussian mixture prior and using it as a regularizer. Interestingly, we show that a weighted attention mechanism emerges naturally in this procedure. Our experiments show that our approach outperforms the now popular Variational Information Bottleneck (VIB) method as well as the recent Category-Dependent VIB (CDVIB).
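
The abstract's regularizer can be pictured concretely. Below is a minimal PyTorch sketch, purely illustrative and not the authors' released code: it estimates the KL term between a diagonal-Gaussian encoder posterior and a learnable Gaussian mixture prior, the quantity a VIB-style objective would penalize. All names (`gmm_kl_regularizer`, `prior_logits`, and so on) are hypothetical. Note how the mixture responsibilities, a softmax over component log-densities, play the role of the weighted attention the abstract mentions.

```python
# A minimal sketch, assuming PyTorch; hypothetical names, not the authors' code.
import math

import torch
import torch.nn.functional as F

LOG_2PI = math.log(2.0 * math.pi)


def gmm_kl_regularizer(mu, logvar, prior_means, prior_logvars, prior_logits):
    """One-sample Monte Carlo estimate of KL(q(z|x) || GMM prior).

    mu, logvar:                 (batch, d) diagonal-Gaussian encoder outputs
    prior_means, prior_logvars: (K, d)     learnable mixture components
    prior_logits:               (K,)       learnable mixture weights
    """
    # Reparameterized sample from the encoder posterior q(z|x).
    z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()

    # log q(z|x): diagonal Gaussian log-density at z.
    log_q = -0.5 * (LOG_2PI + logvar + (z - mu) ** 2 / logvar.exp())
    log_q = log_q.sum(dim=-1)                                  # (batch,)

    # Per-component log-densities log N(z; mu_k, diag(sigma_k^2)).
    z_ = z.unsqueeze(1)                                        # (batch, 1, d)
    comp = -0.5 * (LOG_2PI + prior_logvars
                   + (z_ - prior_means) ** 2 / prior_logvars.exp())
    comp_ll = comp.sum(dim=-1)                                 # (batch, K)

    # Responsibilities softmax(log w_k + comp_ll) act like attention
    # weights of each sample over the mixture components.
    log_w = F.log_softmax(prior_logits, dim=0)                 # (K,)
    log_p = torch.logsumexp(log_w + comp_ll, dim=1)            # (batch,)

    return (log_q - log_p).mean()
```

In a VIB-style training loop this term would replace the usual KL penalty against a fixed N(0, I) prior, e.g. `loss = task_loss + beta * gmm_kl_regularizer(mu, logvar, prior_means, prior_logvars, prior_logits)`, with the mixture parameters optimized jointly with the encoder.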
Problem

Research questions and friction points this paper is trying to address.

Few and loose generalization error bounds for representation learning
Selection of a suitable data-dependent prior
Limitations of existing Variational Information Bottleneck methods
Innovation

Methods, ideas, or system contributions that make the work stand out.

Data-dependent Gaussian mixture priors
Minimum Description Length regularization
Weighted attention mechanism emerging from prior learning
πŸ‘₯ Authors
Milad Sefidgaran
Senior ML Researcher (Machine Learning, Deep Learning, Information Theory)

Abdellatif Zaidi
UniversitΓ© Gustave Eiffel, France

Piotr Krasnowski
Paris Research Center, Huawei Technologies France