Sharp Bounds on the Variance of General Regression Adjustment in Randomized Experiments

📅 2024-10-31
📈 Citations: 1
Influential: 0
🤖 AI Summary
In finite-population causal inference under random assignment, the variance of average treatment effect (ATE) estimators is inherently unidentifiable due to the fundamental problem of causal inference—potential outcomes are never jointly observed—leading to upward bias in conventional variance estimators. This paper extends the asymptotically sharp variance bound of Aronow et al. (2014) for the difference-in-means estimator to general differentiable regression-adjusted estimators. It establishes, for the first time, an asymptotically tight upper bound on the variance of ATE estimators with arbitrary covariate adjustment—including linear regression—within Neyman’s finite-population framework. The bound’s sharpness is rigorously proven by combining an analysis of the non-identifiable covariance term with sharp-bound theory. Simulation and empirical results demonstrate that the proposed bound substantially outperforms existing alternatives, and that regression adjustment meaningfully reduces estimation variance, thereby enhancing inferential efficiency.
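The non-identifiability described in the summary can be made concrete with Neyman's classical finite-population decomposition (standard textbook notation, not quoted from this paper):

```latex
\mathrm{Var}\!\left(\hat{\tau}_{\mathrm{DM}}\right)
  = \frac{S_1^2}{n_1} + \frac{S_0^2}{n_0} - \frac{S_\tau^2}{N},
\qquad
S_\tau^2 = S_1^2 + S_0^2 - 2\,S_{10},
```

where \(S_1^2\) and \(S_0^2\) are the finite-population variances of the treated and control potential outcomes, \(S_{10}\) is their covariance, \(n_1, n_0\) are the arm sizes, and \(N = n_1 + n_0\). Because \(Y_i(1)\) and \(Y_i(0)\) are never jointly observed, \(S_{10}\)—and hence \(S_\tau^2\)—is not identified; the conventional Neyman estimator simply drops the \(S_\tau^2/N\) term, which is the source of the upward bias. Aronow et al. (2014) instead bound \(S_{10}\) sharply, and the present paper extends that idea to regression-adjusted estimators.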

📝 Abstract
Building on statistical foundations laid by Neyman [1923] a century ago, a growing literature focuses on problems of causal inference that arise in the context of randomized experiments, where the target of inference is the average treatment effect in a finite population and random assignment determines which subjects are allocated to each experimental condition. In this framework, variances of average treatment effect estimators remain unidentified because they depend on the covariance between treated and untreated potential outcomes, which are never jointly observed. Aronow et al. [2014] provide an estimator for the variance of the difference-in-means estimator that is asymptotically sharp. In practice, researchers often use some form of covariate adjustment, such as linear regression, when estimating the average treatment effect. Here we extend the Aronow et al. [2014] result, providing asymptotically sharp variance bounds for general regression adjustment. We apply these results to linear regression adjustment and show benefits both in a simulation and in an empirical application.
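To illustrate the flavor of bound the paper generalizes, here is a minimal sketch of the Aronow et al. [2014] sharp bound for the unadjusted difference-in-means estimator. The covariance of potential outcomes is bounded above via the Fréchet–Hoeffding (comonotonic) coupling of the two empirical quantile functions, yielding a lower bound on the variance of unit-level effects that can be subtracted from the conventional Neyman bound. Function names and the quantile-grid construction are our own choices, not the paper's implementation:

```python
from statistics import mean, variance

def _quantile(sorted_xs, p):
    # Linearly interpolated empirical quantile of pre-sorted data, p in (0, 1).
    n = len(sorted_xs)
    h = p * (n - 1)
    lo = int(h)
    hi = min(lo + 1, n - 1)
    return sorted_xs[lo] + (h - lo) * (sorted_xs[hi] - sorted_xs[lo])

def sharp_dm_variance_bound(y1, y0, grid=1000):
    """Sharp and conventional (Neyman) variance bounds for the
    difference-in-means ATE estimator, given observed treated
    outcomes y1 and control outcomes y0.

    Returns (sharp_bound, neyman_bound)."""
    n1, n0 = len(y1), len(y0)
    N = n1 + n0
    s1, s0 = variance(y1), variance(y0)      # sample variances (ddof=1)
    neyman = s1 / n1 + s0 / n0               # conventional conservative bound

    # Comonotonic (Frechet-Hoeffding) upper bound on Cov(Y(1), Y(0)):
    # pair the two empirical quantile functions on a common grid.
    ys1, ys0 = sorted(y1), sorted(y0)
    us = [(k + 0.5) / grid for k in range(grid)]
    q1 = [_quantile(ys1, u) for u in us]
    q0 = [_quantile(ys0, u) for u in us]
    cov_hi = mean(a * b for a, b in zip(q1, q0)) - mean(q1) * mean(q0)

    # Lower bound on the finite-population variance of unit-level effects.
    s_tau_lo = max(0.0, s1 + s0 - 2.0 * cov_hi)
    return neyman - s_tau_lo / N, neyman
```

By construction the sharp bound never exceeds the Neyman bound, and the two coincide only when the comonotonic coupling implies constant unit-level effects; the paper's contribution is the analogue of this construction for general regression-adjusted estimators, where the adjusted outcomes replace the raw ones.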
Problem

Research questions and friction points this paper is trying to address.

Sharp bounds on variance of regression adjustment in experiments
Addressing upward bias in conventional variance estimators
Extending asymptotically sharp variance bounds to general regression
Innovation

Methods, ideas, or system contributions that make the work stand out.

Asymptotically sharp variance bounds
General regression adjustment extension
Linear regression application benefits
Jonas M. Mikhaeil
PhD Student (Statistics), Columbia University
Statistics · Causal Inference · Social Statistics
Donald P. Green
Department of Political Science, Columbia University, New York