Bringing Closure to False Discovery Rate Control: A General Principle for Multiple Testing

📅 2025-09-02
📈 Citations: 0
Influential: 0
🤖 AI Summary
This paper addresses the challenge of uniformly controlling error rates — particularly the false discovery rate (FDR) — in multiple hypothesis testing. The authors propose a general e-value-based closure principle that extends classical closed testing to expectation-based error metrics such as the FDR. By combining e-values, self-consistency conditions, and a generalized closure structure, they recover canonical procedures — including e-BH, BH, and BY — as special cases and derive uniform improvements of them, retaining rigorous FDR control while gaining statistical power and inferential flexibility, including post hoc choice of the rejected set and, under certain conditions, of the error metric and nominal level.

📝 Abstract
We present a novel necessary and sufficient principle for multiple testing methods controlling an expected loss. This principle asserts that every such multiple testing method is a special case of a general closed testing procedure based on e-values. It generalizes the Closure Principle, known to underlie all methods controlling familywise error and tail probabilities of false discovery proportions, to a large class of error rates -- in particular to the false discovery rate (FDR). By writing existing methods as special cases of this procedure, we can achieve uniform improvements of existing multiple testing methods such as the e-Benjamini-Hochberg and the Benjamini-Yekutieli procedures, and the self-consistent method of Su (2018). We also show that methods derived using the closure principle have several valuable properties. For example, they generally control their error rate not just for one rejected set, but simultaneously over many, allowing post hoc flexibility for the researcher. Moreover, we show that because all multiple testing methods for all error metrics are derived from the same procedure, researchers may even choose the error metric post hoc. Under certain conditions, this flexibility even extends to post hoc choice of the nominal error rate.
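The e-Benjamini-Hochberg procedure mentioned in the abstract, which the paper recovers and improves as a special case of its closure framework, can be sketched as follows. This is a minimal illustration of the standard e-BH rule (reject the k hypotheses with the largest e-values, where k is the largest index at which the k-th largest e-value is at least n/(k·α)), not the paper's improved version; the function name and example e-values are illustrative.

```python
import numpy as np

def e_bh(e_values, alpha=0.05):
    """e-BH sketch: with n e-values, reject the k hypotheses with the
    largest e-values, where k is the largest index such that the k-th
    largest e-value is >= n / (k * alpha). This controls the FDR at
    level alpha under arbitrary dependence between the e-values."""
    e = np.asarray(e_values, dtype=float)
    n = len(e)
    order = np.argsort(-e)            # indices sorted by decreasing e-value
    sorted_e = e[order]
    ks = np.arange(1, n + 1)
    passing = sorted_e >= n / (ks * alpha)
    if not passing.any():
        return np.array([], dtype=int)  # no rejections
    k = ks[passing].max()
    return np.sort(order[:k])         # indices of rejected hypotheses

# Illustrative call: two large e-values clear the threshold at level 0.1.
print(e_bh([40, 30, 1, 1, 0.5], alpha=0.1))  # → [0 1]
```

Note the step-up character of the rule: a hypothesis can be rejected even if its e-value alone would not reach n/α, provided enough other e-values are large.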
Problem

Research questions and friction points this paper is trying to address.

Generalizing closure principle to FDR control
Unifying multiple testing methods via e-values
Enabling post hoc error metric selection
Innovation

Methods, ideas, or system contributions that make the work stand out.

General closed testing procedure based on e-values
Uniform improvements to existing multiple testing methods
Post hoc flexibility in error metric selection
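The closed testing idea underlying the contributions above can be illustrated with a brute-force sketch. This is not the paper's general procedure, only the classical closure principle instantiated with one well-known e-value construction (the average of e-values over a set S is itself an e-value for the intersection hypothesis H_S, rejected when it reaches 1/α); the function name and example values are illustrative, and the full-enumeration loop is exponential in n, so it is only for small examples.

```python
from itertools import combinations

import numpy as np

def closed_testing_evalues(e_values, alpha=0.05):
    """Closure principle sketch with e-values: an intersection
    hypothesis H_S is rejected when the average e-value over S
    reaches 1/alpha; an elementary hypothesis H_i is rejected only
    if every intersection hypothesis containing it is rejected.
    This controls the familywise error rate at level alpha."""
    e = np.asarray(e_values, dtype=float)
    n = len(e)
    threshold = 1.0 / alpha
    rejected = []
    for i in range(n):
        # H_i is rejected iff every subset S containing i is rejected.
        all_rejected = all(
            np.mean(e[list(S)]) >= threshold
            for size in range(1, n + 1)
            for S in combinations(range(n), size)
            if i in S
        )
        if all_rejected:
            rejected.append(i)
    return rejected

# Illustrative call: the small third e-value blocks only hypothesis 2.
print(closed_testing_evalues([100, 100, 0.1], alpha=0.05))  # → [0, 1]
```

The paper's contribution, per the abstract, is to generalize exactly this closure structure beyond familywise error to expectation-based metrics such as the FDR.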