Comparing three learn-then-test paradigms in a multivariate normal means problem

📅 2026-01-12
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This study addresses the trade-off between selection bias and statistical power in the “learn-then-test” paradigm for multivariate normal mean testing. It systematically compares three prominent correction strategies: information splitting, null augmentation, and resampling-based post-learning adjustment, establishing a unified asymptotic power framework encompassing all three approaches. Within this framework, post-learning adjustment is the most powerful, while null augmentation is a close and more computationally tractable alternative: its power approaches that of post-learning adjustment when the augmented nulls form a vanishing fraction of the hypotheses. For a tractable proxy, the optimal number of augmented nulls scales as the square root of the number of hypotheses, challenging prevailing heuristic practices. Finally, the optimal split fraction for information splitting depends critically on the difficulty of the underlying structure-learning task.
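The null-augmentation strategy summarized above can be sketched as a conformal-style calibration: observed test statistics are ranked against fresh draws from a known null distribution. All specifics below (signal strength, counts, and the choice k ≈ √m echoing the paper's scaling result) are illustrative assumptions, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

m, k = 1000, 32                 # hypotheses, augmented nulls (k ~ sqrt(m))
mu = np.concatenate([np.full(50, 3.0), np.zeros(m - 50)])
z = rng.normal(mu, 1.0)         # one z-statistic per hypothesis

# Conformal-style null augmentation: draw k fresh statistics from the
# known null distribution and rank each observed statistic among them.
z_null = rng.normal(0.0, 1.0, size=k)

# One-sided conformal p-value: (1 + #augmented nulls at least as extreme) / (k + 1)
p = (1 + (z_null[None, :] >= z[:, None]).sum(axis=1)) / (k + 1)

rejected = p <= 0.05
```

Because the augmented nulls are drawn independently of the data used for learning, these p-values are valid without any selection-bias correction; the cost is the granularity 1/(k+1), which is why the choice of k trades off power against computation.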

📝 Abstract
Many modern procedures use the data to learn a structure and then leverage it to test many hypotheses. If the entire dataset is used at both stages, analytical or computational corrections for selection bias are required to ensure validity (post-learning adjustment). Alternatively, one can learn and/or test on masked versions of the data to avoid selection bias, either via information splitting or null augmentation. Choosing among these three learn-then-test paradigms, and how much masking to employ for the latter two, are critical decisions impacting power that currently lack theoretical guidance. In a multivariate normal means model, we derive asymptotic power formulas for prototypical methods from each paradigm -- variants of sample splitting, conformal-style null augmentation, and resampling-based post-learning adjustment -- quantifying the power losses incurred by masking at each stage. For these paradigm representatives, we find that post-learning adjustment is most powerful, followed by null augmentation, and then information splitting. Moreover, null augmentation can be nearly as powerful as post-learning adjustment, while avoiding its challenges: the power of the former approaches that of the latter if the number of nulls used for augmentation is a vanishing fraction of the number of hypotheses. We also prove for a tractable proxy that the optimal number of nulls scales as the square root of the number of hypotheses, challenging existing heuristics. Finally, we characterize optimal tuning for information splitting by identifying an optimal split fraction and tying it to the difficulty of the learning problem. These results establish a theoretical foundation for key decisions in the deployment of learn-then-test methods.
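The contrast drawn in the abstract between information splitting and naive full-data reuse can be made concrete in a normal means simulation. The sketch below is a minimal illustration under assumed parameter choices (signal size, selection threshold), not the paper's actual procedure:

```python
import numpy as np

rng = np.random.default_rng(0)

n, m = 200, 1000                # observations per hypothesis, number of hypotheses
mu = np.zeros(m)
mu[:50] = 0.4                   # a few nonzero means (signals)
X = rng.normal(mu, 1.0, size=(n, m))

# --- Information splitting: learn on the first half, test on the second ---
X_learn, X_test = X[: n // 2], X[n // 2 :]
# "Learning" step: select hypotheses that look promising on the first half.
selected = np.abs(X_learn.mean(axis=0)) > 0.2
# Testing step: z-statistics computed only from the held-out half, so
# selection introduces no bias into their null distribution.
z = X_test.mean(axis=0)[selected] * np.sqrt(n // 2)

# --- Naive full-data reuse: the same data selects and tests ---
# Without a post-learning adjustment, the z-statistics of selected true
# nulls are inflated, because selection favored large sample means.
selected_full = np.abs(X.mean(axis=0)) > 0.2
z_naive = X.mean(axis=0)[selected_full] * np.sqrt(n)
```

The split-sample statistics `z` can be compared to standard normal critical values as-is; the naive statistics `z_naive` cannot, which is the validity problem that post-learning adjustment methods correct at the cost of extra analysis or computation.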
Problem

Research questions and friction points this paper is trying to address.

learn-then-test
selection bias
multiple hypothesis testing
data masking
statistical power
Innovation

Methods, ideas, or system contributions that make the work stand out.

learn-then-test
selection bias correction
null augmentation
asymptotic power analysis
information splitting
Abhinav Chakraborty
Department of Statistics, Columbia University, New York, NY 10027, U.S.A.
Junu Lee
Department of Statistics and Data Science, University of Pennsylvania, Philadelphia, PA 19104, U.S.A.
Eugene Katsevich
Assistant Professor, Wharton Statistics Department
multiple hypothesis testing
variable selection
high dimensional inference
statistical genetics