🤖 AI Summary
This paper investigates uniform mean estimability for families of distributions over {0,1}^ℕ, moving beyond the classical P-Glivenko–Cantelli framework, which relies exclusively on the empirical mean estimator. It introduces *uniform mean estimability* (UME-learnability): the existence of a single estimator, not necessarily the empirical mean, that estimates the mean uniformly consistently across the entire family. By analyzing the geometric structure of the mean vectors and combining constructive estimation techniques with set-theoretic arguments, the authors show that separability of the mean vectors is sufficient but not necessary for UME-learnability, and explicitly construct a family whose mean vectors are non-separable yet which is UME-learnable. They further prove that UME-learnability is closed under countable unions, resolving a conjecture posed by Cohen et al. (2025).
📝 Abstract
We characterize conditions under which collections of distributions on $\{0,1\}^{\mathbb{N}}$ admit uniform estimation of their mean. Prior work, going back to Vapnik and Chervonenkis (1971), has focused on uniform convergence of the empirical mean estimator, leading to the property known as $P$-Glivenko-Cantelli. We extend this framework by moving beyond the empirical mean estimator and introducing Uniform Mean Estimability, also called $UME$-learnability, which captures when a collection permits uniform mean estimation by an arbitrary (not necessarily empirical) estimator. We work in the space formed by the mean vectors of the collection of distributions, where, for each distribution, the mean vector records the expected value in each coordinate. We show that separability of the mean vectors is a sufficient condition for $UME$-learnability. However, separability of the mean vectors is not necessary: we construct a collection of distributions whose mean vectors are non-separable yet $UME$-learnable, using techniques fundamentally different from those in our separability-based analysis. Finally, we establish that countable unions of $UME$-learnable collections are also $UME$-learnable, resolving a conjecture posed in Cohen et al. (2025).
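To make the baseline concrete, here is a minimal sketch (not from the paper) of the classical empirical mean estimator applied coordinate-wise to i.i.d. samples from a distribution on $\{0,1\}^{\mathbb{N}}$. The product-Bernoulli distribution, its mean vector `p`, the truncation to the first `D` coordinates, and the sample size are all hypothetical choices made for illustration; the sup-norm error is the quantity that must vanish uniformly over a collection for the $P$-Glivenko-Cantelli property.

```python
import numpy as np

rng = np.random.default_rng(0)

D = 5                                     # coordinates kept from {0,1}^N (truncation for computation)
p = np.array([0.1, 0.3, 0.5, 0.7, 0.9])  # hypothetical mean vector of one distribution
n = 100_000                               # number of i.i.d. samples

# Draw n samples from the (hypothetical) product Bernoulli(p) distribution:
# each row is one sample in {0,1}^D.
samples = (rng.random((n, D)) < p).astype(np.int8)

# The empirical mean estimator: the coordinate-wise average of the samples,
# which estimates the mean vector of the underlying distribution.
empirical_mean = samples.mean(axis=0)

# Sup-norm estimation error for this single distribution.
error = np.abs(empirical_mean - p).max()
print(empirical_mean, error)
```

By the law of large numbers each coordinate of `empirical_mean` converges to the corresponding entry of `p`; the paper's question is when such convergence can be made uniform over a whole collection of distributions, possibly with an estimator other than this one.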