🤖 AI Summary
This work addresses one-sample and two-sample hypothesis testing under unknown distributions by constructing binary tests based on relative entropy thresholds. Leveraging empirical distributions, large deviation theory, and information-theoretic tools, the authors develop a streamlined proof framework that gives an intuitive account of the asymptotic optimality of Hoeffding's test and extends naturally to the two-sample setting. The main contributions are establishing the asymptotic optimality of the proposed test in both the one-sample and two-sample scenarios and, for the first time, proving a strong converse theorem for the two-sample case. These results provide a unified and rigorous theoretical foundation for nonparametric hypothesis testing.
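For concreteness, the relative-entropy threshold tests described above can be written as follows. The notation here is ours, a paraphrase of the summary rather than a quotation of the paper; in particular, the argument order in the two-sample statistic is one natural reading, not a confirmed detail.

```latex
% One-sample (Hoeffding's test): with nominal distribution P_0 and
% empirical distribution \hat{P}_{x^n} of the observed samples x^n,
% declare the alternative when
\phi(x^n) = \mathbb{1}\left\{ D\left(\hat{P}_{x^n} \,\middle\|\, P_0\right) \ge \eta \right\}.
% Two-sample analogue: replace P_0 by the empirical distribution
% \hat{Q}_{y^m} of the second sample (argument order is our assumption):
\psi(x^n, y^m) = \mathbb{1}\left\{ D\left(\hat{P}_{x^n} \,\middle\|\, \hat{Q}_{y^m}\right) \ge \eta \right\},
% where \eta > 0 is the threshold governing the type-I error exponent.
```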
📄 Abstract
In this work, we revisit the one- and two-sample testing problems: binary hypothesis testing in which one or both distributions are unknown. For the one-sample test, we provide a more streamlined proof of the asymptotic optimality of Hoeffding's likelihood ratio test, which is equivalent to a threshold test on the relative entropy between the empirical distribution and the nominal distribution. The new proof offers an intuitive interpretation and extends naturally to the two-sample test, where we show that a similar form of Hoeffding's test, namely a threshold test on the relative entropy between the two empirical distributions, is also asymptotically optimal. A strong converse for the two-sample test is also obtained.
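Since the abstract specifies the test statistics concretely, a minimal sketch may help fix ideas. The Python snippet below is our illustration, not the authors' code: it implements the one-sample Hoeffding test and the two-sample analogue over a finite alphabet. The threshold `eta`, the toy alphabet, and the argument order of the two-sample statistic are illustrative assumptions.

```python
# Sketch (not from the paper) of relative-entropy threshold tests
# over a finite alphabet {0, 1, ..., alphabet_size - 1}.
import numpy as np

def empirical_pmf(samples: np.ndarray, alphabet_size: int) -> np.ndarray:
    """Empirical distribution (type) of integer-valued samples."""
    counts = np.bincount(samples, minlength=alphabet_size)
    return counts / counts.sum()

def kl_divergence(p: np.ndarray, q: np.ndarray) -> float:
    """D(p || q) in nats, with 0 log 0 = 0 and +inf on support mismatch."""
    mask = p > 0
    if np.any(q[mask] == 0):
        return np.inf
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

def hoeffding_one_sample(x: np.ndarray, p0: np.ndarray, eta: float) -> bool:
    """Declare H1 iff D(P_hat || P0) >= eta (Hoeffding's test)."""
    p_hat = empirical_pmf(x, len(p0))
    return kl_divergence(p_hat, p0) >= eta

def two_sample_test(x: np.ndarray, y: np.ndarray,
                    alphabet_size: int, eta: float) -> bool:
    """Two-sample analogue: threshold the relative entropy between the
    two empirical distributions (argument order is our assumption)."""
    p_hat = empirical_pmf(x, alphabet_size)
    q_hat = empirical_pmf(y, alphabet_size)
    return kl_divergence(p_hat, q_hat) >= eta

# Toy usage: biased-coin samples against a fair nominal distribution.
rng = np.random.default_rng(0)
p0 = np.array([0.5, 0.5])
x = rng.choice(2, size=1000, p=[0.8, 0.2])   # samples from the alternative
y = rng.choice(2, size=1000, p=[0.5, 0.5])   # samples matching the nominal
print(hoeffding_one_sample(x, p0, eta=0.05))        # likely True (reject H0)
print(two_sample_test(x, y, alphabet_size=2, eta=0.05))  # likely True
```

With finite samples, the two-sample statistic can be infinite when the second empirical distribution misses a symbol the first one contains; practical variants typically smooth the empirical distributions, a detail this sketch omits.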