I-Con: A Unifying Framework for Representation Learning

📅 2025-04-23
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing representation learning paradigms (clustering, spectral methods, dimensionality reduction, contrastive learning, and supervised learning) employ diverse, ad hoc loss functions that lack a unified theoretical foundation. Method: This paper proposes I-Con, an information-theoretic framework that models these paradigms as minimizing an integrated KL divergence between two conditional distributions. Contribution/Results: I-Con uncovers a shared implicit information-geometric structure across paradigms, enabling principled cross-paradigm loss composition and yielding an interpretable debiasing principle. Its theory-driven construction supports both interpretability and generalization. Empirically, I-Con improves unsupervised classification accuracy on ImageNet-1K by 8% over the prior state of the art and significantly mitigates bias in contrastive representation learning.

📝 Abstract
As the field of representation learning grows, there has been a proliferation of different loss functions to solve different classes of problems. We introduce a single information-theoretic equation that generalizes a large collection of modern loss functions in machine learning. In particular, we introduce a framework that shows that several broad classes of machine learning methods are precisely minimizing an integrated KL divergence between two conditional distributions: the supervisory and learned representations. This viewpoint exposes a hidden information geometry underlying clustering, spectral methods, dimensionality reduction, contrastive learning, and supervised learning. This framework enables the development of new loss functions by combining successful techniques from across the literature. We not only present a wide array of proofs, connecting over 23 different approaches, but we also leverage these theoretical results to create state-of-the-art unsupervised image classifiers that achieve a +8% improvement over the prior state-of-the-art on unsupervised classification on ImageNet-1K. We also demonstrate that I-Con can be used to derive principled debiasing methods which improve contrastive representation learners.
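The abstract's core idea, a KL divergence between a supervisory conditional distribution p(·|i) and a learned distribution q(·|i), integrated (here, averaged) over data points i, can be sketched as follows. The label-derived p, the softmax-over-similarities q, and all names here are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def softmax(z):
    """Row-wise softmax; tolerates -inf entries (masked pairs)."""
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def icon_loss(p, q, eps=1e-12):
    """Averaged KL(p(.|i) || q(.|i)) over data points i,
    the generic form of the I-Con objective."""
    return np.mean(np.sum(p * (np.log(p + eps) - np.log(q + eps)), axis=1))

# Toy instantiation: a supervised-style target vs. a contrastive-style model.
rng = np.random.default_rng(0)
labels = np.array([0, 0, 1, 1])
emb = rng.normal(size=(4, 2))  # stand-in learned embeddings

# p(j|i): uniform over same-label points j != i (a supervisory distribution)
same = (labels[:, None] == labels[None, :]).astype(float)
np.fill_diagonal(same, 0.0)
p = same / same.sum(axis=1, keepdims=True)

# q(j|i): softmax over embedding similarities, excluding j = i
sim = emb @ emb.T
np.fill_diagonal(sim, -np.inf)
q = softmax(sim)

loss = icon_loss(p, q)
```

Different choices of p and q recover different paradigms: label-based p gives supervised-style targets, augmentation-based p gives contrastive-style targets, while q always comes from the learned representation.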
Problem

Research questions and friction points this paper is trying to address.

Unifying diverse loss functions in representation learning
Generalizing modern machine learning loss functions
Improving unsupervised classification and debiasing methods
Innovation

Methods, ideas, or system contributions that make the work stand out.

Unified information-theoretic framework for representation learning
Generalizes diverse loss functions via integrated KL divergence
Enhances unsupervised classification and debiasing techniques
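The debiasing bullet above can be illustrated by one simple scheme consistent with the KL-to-target view: smoothing the supervisory neighborhood distribution toward uniform before matching it, so the model is not pushed to assign zero probability to unobserved pairs. The function name and the mixing rule are hypothetical; the paper's exact debiasing principle may differ.

```python
import numpy as np

def debias_targets(p, alpha=0.1):
    """Mix a supervisory neighborhood distribution p(.|i) with the
    uniform distribution. This is an illustrative smoothing-based
    debiasing sketch, not the paper's exact formulation."""
    n = p.shape[1]
    return (1 - alpha) * p + alpha / n

p = np.array([[0.0, 1.0, 0.0],
              [0.5, 0.0, 0.5]])
p_debiased = debias_targets(p, alpha=0.3)
assert np.allclose(p_debiased.sum(axis=1), 1.0)  # rows remain distributions
```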
Authors

Shaden Alshammari (Graduate student at MIT; Machine Learning, Computer Vision)
John R. Hershey (Google)
Axel Feldmann (MIT)
W. T. Freeman (MIT, Google)
M. Hamilton (MIT, Microsoft)