Uniform Convergence Beyond Glivenko-Cantelli

📅 2025-10-24
📈 Citations: 0
Influential: 0
🤖 AI Summary
This paper investigates uniform mean estimability for families of distributions over $\{0,1\}^{\mathbb{N}}$, moving beyond the classical $P$-Glivenko–Cantelli framework, which relies exclusively on empirical means. It introduces *uniform mean estimability* (UME-learnability): the existence of a single estimator, potentially non-empirical, that uniformly consistently estimates the mean across the entire family. By analyzing the geometric structure of the family's mean vectors and combining constructive estimation techniques with set-theoretic arguments, the authors show that separability of the mean vectors is sufficient but not necessary for UME-learnability, and explicitly construct the first known example of a non-separable yet UME-learnable family. They further prove that UME-learnability is closed under countable unions, resolving a conjecture posed by Cohen et al. (2025).

📝 Abstract
We characterize conditions under which collections of distributions on $\{0,1\}^{\mathbb{N}}$ admit uniform estimation of their means. Prior work, going back to Vapnik and Chervonenkis (1971), has focused on uniform convergence of the empirical mean estimator, leading to the notion of $P$-Glivenko–Cantelli classes. We extend this framework by moving beyond the empirical mean estimator and introducing *uniform mean estimability*, also called $UME$-learnability, which captures when a collection permits uniform mean estimation by an arbitrary estimator. We work in the space formed by the mean vectors of the collection of distributions: for each distribution, the mean vector records the expected value in each coordinate. We show that separability of the mean vectors is a sufficient condition for $UME$-learnability. However, separability is not necessary: we construct a collection of distributions whose mean vectors are non-separable yet which is $UME$-learnable, using techniques fundamentally different from those in our separability-based analysis. Finally, we establish that countable unions of $UME$-learnable collections are also $UME$-learnable, solving a conjecture posed in Cohen et al. (2025).
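To make the baseline concrete, here is a minimal sketch of the empirical mean estimator whose uniform convergence the $P$-Glivenko–Cantelli framework studies. All names (`d`, `n_samples`, `bias`) and the choice of a product distribution truncated to finitely many coordinates are illustrative assumptions, not the paper's construction; the paper's point is precisely that estimators other than this one are allowed.

```python
import numpy as np

rng = np.random.default_rng(0)

d = 5            # illustrative truncation: first d coordinates of {0,1}^N
n_samples = 10_000

# A hypothetical product distribution on {0,1}^d whose mean vector is `bias`:
# coordinate i is 1 with probability bias[i].
bias = np.array([0.1, 0.3, 0.5, 0.7, 0.9])

# Draw i.i.d. samples; each row is one point of {0,1}^d.
samples = (rng.random((n_samples, d)) < bias).astype(int)

# The empirical mean vector: coordinate-wise sample averages.
empirical_mean = samples.mean(axis=0)

# Sup-norm distance to the true mean vector -- the error that uniform
# convergence results control simultaneously over a whole family.
error = np.max(np.abs(empirical_mean - bias))
print(empirical_mean, error)
```

For a single distribution this estimator converges by the law of large numbers; the paper's question is when a *single* (possibly different) estimator achieves such convergence uniformly over an entire collection.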
Problem

Research questions and friction points this paper is trying to address.

Extending uniform convergence beyond empirical mean estimators
Characterizing conditions for uniform mean estimability in distributions
Establishing learnability for countable unions of UME-learnable collections
Innovation

Methods, ideas, or system contributions that make the work stand out.

Introduces Uniform Mean Estimability beyond empirical mean
Shows separability of mean vectors enables uniform estimation
Proves non-separable distributions can also be UME-learnable
Tanmay Devale
Purdue University
Pramith Devulapalli
Purdue University
Steve Hanneke
Purdue University
Learning Theory · Statistics · Artificial Intelligence