🤖 AI Summary
InfoNCE, widely adopted for mutual information (MI) estimation, suffers from systematic bias and fails to provide consistent estimates of the true MI. To address this, we propose InfoNCE-anchor: a consistent, plug-in MI estimator obtained by augmenting InfoNCE with a learnable auxiliary anchor class, which substantially reduces the bias of density-ratio estimation. Building on scoring rule theory, we further establish a unified framework that reveals fundamental connections among contrastive learning objectives. Experiments demonstrate that InfoNCE-anchor achieves state-of-the-art accuracy in MI estimation. However, this improvement does not translate into gains on downstream self-supervised tasks, suggesting that representation learning depends more on structured density-ratio modeling than on precise MI values. Our work provides new theoretical insights into the foundations of contrastive learning and the reliability of MI estimation.
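As background (this is the standard $K$-sample bound, not a result specific to this paper), the bias referred to above comes from the fact that the InfoNCE objective lower-bounds MI but is capped at $\log K$, where $K$ is the number of samples in the batch:

$$
I(X;Y) \;\ge\; \mathbb{E}\!\left[\frac{1}{K}\sum_{i=1}^{K}\log\frac{e^{f(x_i,\,y_i)}}{\tfrac{1}{K}\sum_{j=1}^{K}e^{f(x_i,\,y_j)}}\right] \;\le\; \log K,
$$

where $f$ is the learned critic and $(x_i, y_i)$ are jointly sampled pairs. Whenever the true MI exceeds $\log K$, the estimate saturates regardless of how good the critic is.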
📝 Abstract
The InfoNCE objective, originally introduced for contrastive representation learning, has become a popular choice for mutual information (MI) estimation, despite its indirect connection to MI. In this paper, we demonstrate why InfoNCE should not be regarded as a valid MI estimator, and we introduce a simple modification, which we refer to as InfoNCE-anchor, for accurate MI estimation. Our modification introduces an auxiliary anchor class, enabling consistent density ratio estimation and yielding a plug-in MI estimator with significantly reduced bias. Beyond this, we generalize our framework using proper scoring rules, which recover InfoNCE-anchor as a special case when the log score is employed. This formulation unifies a broad spectrum of contrastive objectives, including NCE, InfoNCE, and $f$-divergence variants, under a single principled framework. Empirically, we find that InfoNCE-anchor with the log score achieves the most accurate MI estimates; in self-supervised representation learning experiments, however, the anchor does not improve downstream task performance. These findings corroborate that contrastive representation learning benefits not from accurate MI estimation per se, but from the learning of structured density ratios.
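To make the bias concrete, here is a minimal numerical sketch of the standard plug-in InfoNCE estimate, not the paper's InfoNCE-anchor estimator (the anchor-class construction is not spelled out in the abstract). The Gaussian toy pair, the oracle critic, and all names below are illustrative assumptions; the point is that even with an optimal critic the estimate saturates at $\log K$ once the true MI exceeds it.

```python
import numpy as np

def infonce_mi_estimate(scores: np.ndarray) -> float:
    """Plug-in InfoNCE MI estimate from a K x K critic score matrix.

    scores[i, j] = f(x_i, y_j), with (x_i, y_i) drawn jointly.
    Because the row-wise log-softmax term is non-positive, the
    estimate can never exceed log K -- the bias discussed above.
    """
    K = scores.shape[0]
    # numerically stable log-softmax over each row
    row_max = scores.max(axis=1, keepdims=True)
    log_norm = row_max + np.log(np.exp(scores - row_max).sum(axis=1, keepdims=True))
    log_softmax = scores - log_norm
    # average log density-ratio on the positive (diagonal) pairs, shifted by log K
    return float(np.diag(log_softmax).mean() + np.log(K))

# Toy check: correlated Gaussians with known MI, scored by the oracle
# log density ratio (an optimal critic up to an additive constant).
rng = np.random.default_rng(0)
K, rho = 128, 0.99999
x = rng.standard_normal(K)
y = rho * x + np.sqrt(1.0 - rho**2) * rng.standard_normal(K)
true_mi = -0.5 * np.log(1.0 - rho**2)        # about 5.4 nats, above log K ~ 4.85
scores = (rho * np.outer(x, y)
          - 0.5 * rho**2 * (x[:, None]**2 + y[None, :]**2)) / (1.0 - rho**2)
print(f"InfoNCE estimate: {infonce_mi_estimate(scores):.2f}  "
      f"true MI: {true_mi:.2f}  cap log K: {np.log(K):.2f}")
```

Running this, the estimate stays near the $\log K$ cap rather than the true MI, which is the saturation behavior the paper's anchor modification is designed to remove.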