Selective Randomization Inference for Adaptive Experiments

📅 2024-05-11
📈 Citations: 4
Influential: 0
🤖 AI Summary
In adaptive experiments, data-driven adjustments to the design invalidate conventional statistical inference, and existing remedies apply only to specific designs under strong assumptions. This paper proposes a selective randomization inference framework: it models the data-generating process with a directed acyclic graph (DAG) and applies conditional post-selection inference within randomization tests. The authors present this as the first systematic integration of post-selection conditioning into randomization-based inference; the approach requires neither i.i.d. data nor parametric modelling assumptions and accommodates arbitrary adaptive designs. To mitigate the risk of disconnected confidence intervals, they introduce a hold-out-unit method. Theoretically and empirically, the approach controls the selective type-I error and yields valid confidence intervals for a homogeneous treatment effect, outperforming conventional randomization tests in robustness and generality.

📝 Abstract
Adaptive experiments use preliminary analyses of the data to inform further course of action and are commonly used in many disciplines including medical and social sciences. Because the null hypothesis and experimental design are not pre-specified, it has long been recognized that statistical inference for adaptive experiments is not straightforward. Most existing methods only apply to specific adaptive designs and rely on strong assumptions. In this work, we propose selective randomization inference as a general framework for analysing adaptive experiments. In a nutshell, our approach applies conditional post-selection inference to randomization tests. By using directed acyclic graphs to describe the data generating process, we derive a selective randomization p-value that controls the selective type-I error without requiring independent and identically distributed data or any other modelling assumptions. We show how rejection sampling and Markov Chain Monte Carlo can be used to compute the selective randomization p-values and construct confidence intervals for a homogeneous treatment effect. To mitigate the risk of disconnected confidence intervals, we propose the use of hold-out units. Lastly, we demonstrate our method and compare it with other randomization tests using synthetic and real-world data.
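The conditioning idea in the abstract can be illustrated with a toy two-stage experiment. The sketch below is an illustrative rejection-sampling scheme, not the paper's algorithm: under a sharp null of no effect the outcomes are fixed, we re-randomize treatment assignments, and we keep only draws that reproduce the data-dependent selection event (here, a hypothetical rule "run stage 2 only if the stage-1 effect estimate is positive"). All names and the selection rule are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy outcomes under a sharp null of no treatment effect, so they are
# unchanged when we re-randomize assignments (key randomization-test premise).
n1, n2 = 20, 20
y1 = rng.normal(0.3, 1.0, n1)   # hypothetical stage-1 outcomes
y2 = rng.normal(0.3, 1.0, n2)   # hypothetical stage-2 outcomes
y = np.concatenate([y1, y2])

def assign(rng, n):
    """Balanced randomization: half treated, half control."""
    z = np.zeros(n, dtype=bool)
    z[rng.choice(n, n // 2, replace=False)] = True
    return z

def selected(z1, y1):
    """Hypothetical adaptive rule: continue only if stage-1 estimate > 0."""
    return y1[z1].mean() - y1[~z1].mean() > 0

def statistic(z, y):
    """Difference in means over both stages."""
    return y[z].mean() - y[~z].mean()

# Observed experiment: redraw until the selection event occurs,
# mimicking data that reached us only because stage 2 was run.
while True:
    z1_obs, z2_obs = assign(rng, n1), assign(rng, n2)
    if selected(z1_obs, y1):
        break
z_obs = np.concatenate([z1_obs, z2_obs])
t_obs = statistic(z_obs, y)

# Selective randomization p-value via rejection sampling:
# re-randomize, but condition on the same selection event.
kept = []
for _ in range(10000):
    z1, z2 = assign(rng, n1), assign(rng, n2)
    if selected(z1, y1):                       # reject draws not selected
        kept.append(statistic(np.concatenate([z1, z2]), y))
kept = np.array(kept)
p_selective = (1 + np.sum(kept >= t_obs)) / (1 + len(kept))
print(f"accepted draws: {len(kept)}, selective p-value: {p_selective:.3f}")
```

An unconditional randomization test would use all draws, ignoring that the hypothesis was only tested because of the stage-1 result; conditioning on the selection event is what controls the selective type-I error. The paper uses MCMC when, unlike here, the selection event is too rare for plain rejection sampling.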
Problem

Research questions and friction points this paper is trying to address.

Data-dependent null hypotheses and experimental designs invalidate standard inference
Existing methods apply only to specific adaptive designs and rely on strong assumptions
Need to control the selective type-I error without modelling assumptions
Innovation

Methods, ideas, or system contributions that make the work stand out.

Selective randomization inference as a general framework for adaptive experiments
Conditional post-selection inference applied to randomization tests
Directed acyclic graphs to describe the data-generating process
Hold-out units to mitigate the risk of disconnected confidence intervals