🤖 AI Summary
Hybrid control trials (HCTs) typically assume exchangeability between external controls and the trial population, an assumption that is frequently violated and can introduce non-negligible bias from unmeasured confounding; existing work offers little rigorous quantification of this bias. This paper proposes a nonparametric sensitivity-analysis framework that systematically adapts omitted-variable-bias theory to HCTs. Without parametric modeling assumptions, it bounds the maximum causal bias induced by unmeasured confounders in terms of the strength of covariate–outcome associations and the distinguishability of trial and external subjects. The approach relies only on weak identification conditions, balancing statistical efficiency with inferential robustness. Theory and simulations show that the method reliably bounds bias while preserving the efficiency gains of HCTs, providing a practical, interpretable tool for robustness assessment in HCT design and supporting the rigorous use of HCTs in real-world evidence generation.
📝 Abstract
In the digital era, it is easier than ever to collect and exploit rich covariate information in trials. Recent work explores how to use this information to integrate external controls, including the use of hybrid control trials (HCTs) where a randomized controlled trial is augmented with external controls. HCTs are of particular interest due to their ability to preserve partial randomization while also improving trial efficiency. However, most HCT estimators rely on an unrealistic assumption: that the external controls are drawn from the same population as the trial subjects (perhaps conditionally on covariates). There has been little formal work to quantify the inevitable bias introduced from a violation of this assumption, slowing the acceptance of HCT designs. To address this, we introduce a non-parametric sensitivity analysis that recognizes that the assumption can be reframed as a "no unobserved confounders" assumption. We leverage omitted variable bias methodologies to estimate the maximum bias introduced from unmeasured covariates, allowing for a critical evaluation of the causal gap that can invalidate significant findings. We show that with a relatively weak understanding of the covariate-outcome relationship and the distinguishability of trial and external subjects, this method reliably bounds bias while also allowing for gains in efficiency. We conclude by discussing considerations for designing and evaluating HCTs, drawing on insights from simulations and theoretical analyses.
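The core idea of the sensitivity analysis can be illustrated with a toy sketch. This is not the paper's estimator; all names and parameters below are hypothetical. Assuming the hybrid estimator places a fraction `w` of the control-arm weight on external controls, and the analyst posits a bound `delta` on how far the external controls' conditional mean outcome may drift from the trial population's due to unmeasured confounding, the worst-case bias passed through to the hybrid estimate is at most `w * delta`, and a confidence interval can be widened accordingly:

```python
def bias_aware_interval(estimate, se, w, delta, z=1.96):
    """Sensitivity-adjusted 95% interval for a hybrid-control effect estimate.

    estimate -- hybrid treatment-effect estimate
    se       -- its standard error
    w        -- fraction of control-arm weight on external controls (hypothetical)
    delta    -- analyst-chosen bound on confounding-induced outcome shift (hypothetical)
    """
    bound = w * delta          # maximum bias transmitted by borrowing
    half = z * se + bound      # widen the naive half-width by the bias bound
    return (estimate - half, estimate + half)

# A significant naive finding (2.0 +/- 0.98) survives this level of confounding:
lo, hi = bias_aware_interval(estimate=2.0, se=0.5, w=0.4, delta=1.0)
print(lo, hi)  # (0.62, 3.38) -- still excludes zero
```

If the interval still excludes zero under a plausible `delta`, the finding is robust to that degree of unmeasured confounding; the "causal gap" mentioned in the abstract is the smallest `delta` at which significance is lost.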