🤖 AI Summary
This paper addresses the problem of diminished statistical power in nonparametric tests arising from excessive dependence between test statistics and auxiliary statistics. To resolve this, we propose a novel framework grounded in statistical independence principles. Methodologically, we reformulate hypothesis testing via a relativity principle and establish asymptotic independence between test and auxiliary statistics—yielding a Basu-type theoretical guarantee—while preserving distributional invariance under both null and alternative hypotheses. Integrating decision-theoretic criteria with explicit independence constraints, our approach systematically enhances classical tests, including Shapiro–Wilk, Anderson–Darling, Kolmogorov–Smirnov, and symmetry-center tests. Extensive simulations demonstrate that the proposed methods significantly improve power over conventional approaches while retaining robustness and computational efficiency, making them broadly applicable across diverse nonparametric settings.
📝 Abstract
This paper introduces a decision-theoretic framework for constructing and evaluating test statistics based on their relationship with ancillary statistics: quantities whose distributions remain fixed under both the null and alternative hypotheses. Rather than focusing solely on maximizing discriminatory power, the proposed approach emphasizes reducing dependence between a test statistic and relevant ancillary structures. We show that minimizing such dependence can yield most powerful (MP) procedures. A Basu-type independence result is established, and we demonstrate that certain MP statistics also characterize the underlying data distribution. The methodology is illustrated through modifications of classical nonparametric tests, including the Shapiro–Wilk, Anderson–Darling, and Kolmogorov–Smirnov tests, as well as a test for the center of symmetry. Simulation studies highlight the power and robustness of the proposed procedures. The framework is computationally simple and offers a principled strategy for improving statistical testing.