Do Contemporary Causal Inference Models Capture Real-World Heterogeneity? Findings from a Large-Scale Benchmark

πŸ“… 2024-10-09
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ“„ PDF
πŸ€– AI Summary
Contemporary Conditional Average Treatment Effect (CATE) models lack systematic evaluation of their ability to capture real-world treatment effect heterogeneity. Method: We conduct the first large-scale benchmarking study of 16 state-of-the-art CATE algorithms across 12 real-world datasets and 43,200 observational sampling variants, introducing observational sampling into the CATE evaluation framework. We propose a novel statistical measure $Q$ and a family of unbiased estimators $\hat{Q}$ that asymptotically select the model with the smallest mean squared error (MSE). Contribution/Results: The results reveal systemic failure: 62% of CATE estimates underperform a null-effect baseline; among datasets with at least one useful estimate, 80% still have higher MSE than a constant-effect model; and orthogonality-based methods dominate in only 30% of scenarios. The study exposes fundamental limitations in modeling true heterogeneity and calls for a more robust, reproducible, and data-distribution-aware CATE evaluation paradigm.
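To make the two reference points concrete, the sketch below compares a noisy CATE estimate against the zero-effect and constant-effect baselines on a hypothetical synthetic data-generating process. All names, numbers, and the data-generating process here are illustrative assumptions, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5_000
X = rng.normal(size=(n, 3))
tau_true = 1.0 + 0.5 * X[:, 0]  # hypothetical heterogeneous treatment effect

# Stand-in for a fitted CATE model: the truth plus estimation noise.
tau_hat = tau_true + rng.normal(scale=1.0, size=n)

def mse(pred, truth):
    return float(np.mean((pred - truth) ** 2))

mse_model = mse(tau_hat, tau_true)                      # ~1.00
mse_zero = mse(np.zeros(n), tau_true)                   # null-effect baseline, ~1.25
mse_const = mse(np.full(n, tau_true.mean()), tau_true)  # constant-effect (ATE) baseline, ~0.25

print(f"model: {mse_model:.3f}  zero: {mse_zero:.3f}  constant: {mse_const:.3f}")
# A model only demonstrates heterogeneity if it beats *both* baselines;
# here the noisy model beats the zero baseline but loses to the constant one,
# mirroring the paper's finding (b).
```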

πŸ“ Abstract
We present unexpected findings from a large-scale benchmark study evaluating Conditional Average Treatment Effect (CATE) estimation algorithms, i.e., CATE models. By running 16 modern CATE models on 12 datasets and 43,200 sampled variants generated through diverse observational sampling strategies, we find that: (a) 62% of CATE estimates have a higher Mean Squared Error (MSE) than a trivial zero-effect predictor, rendering them ineffective; (b) in datasets with at least one useful CATE estimate, 80% still have higher MSE than a constant-effect model; and (c) orthogonality-based models outperform other models only 30% of the time, despite widespread optimism about their performance. These findings highlight significant challenges in current CATE models and underscore the need for broader evaluation and methodological improvements. Our findings stem from a novel application of \textit{observational sampling}, originally developed to evaluate Average Treatment Effect (ATE) estimates from observational methods using experimental data. To adapt observational sampling for CATE evaluation, we introduce a statistical parameter, $Q$, that equals MSE minus a constant and therefore preserves the ranking of models by their MSE. We then derive a family of sample statistics, collectively called $\hat{Q}$, that can be computed from real-world data. When used in observational sampling, $\hat{Q}$ is an unbiased estimator of $Q$ and asymptotically selects the model with the smallest MSE. To ensure the benchmark reflects real-world heterogeneity, we handpick datasets where outcomes come from the field rather than from simulation. By integrating observational sampling, new statistics, and real-world datasets, the benchmark provides new insights into CATE model performance and reveals gaps in capturing real-world heterogeneity, emphasizing the need for more robust benchmarks.
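The abstract does not spell out the $\hat{Q}$ family, but the key property, a statistic that differs from MSE by a model-independent constant and so preserves model rankings, can be illustrated with the standard Horvitz-Thompson pseudo-outcome under a known propensity score. The following is a minimal sketch under that assumption, not the paper's actual estimator:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000
X = rng.normal(size=n)
e = 0.5                               # known propensity score (e.g., experimental data)
T = rng.binomial(1, e, size=n)
tau = 1.0 + 0.5 * X                   # hypothetical true CATE
Y = X + T * tau + rng.normal(size=n)  # observed outcome

# Horvitz-Thompson pseudo-outcome phi with E[phi | X] = tau(X).
phi = Y * T / e - Y * (1 - T) / (1 - e)

def q_hat(tau_pred):
    # E[(tau_pred - phi)^2] = MSE(tau_pred) + E[Var(phi | X)], and the second
    # term does not depend on the model, so the MSE ranking is preserved even
    # though tau itself is never observed.
    return float(np.mean((tau_pred - phi) ** 2))

models = {
    "zero-effect": np.zeros(n),
    "constant (ATE)": np.full(n, tau.mean()),
    "oracle": tau,
}
for name, pred in models.items():
    print(f"{name:>14}: Q-hat = {q_hat(pred):8.3f}   true MSE = {np.mean((pred - tau) ** 2):.3f}")
```

Running this shows the pseudo-outcome statistic ordering the three predictors the same way the unobservable true MSE does, which is the property the paper's $Q$ and $\hat{Q}$ are designed to provide.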
Problem

Research questions and friction points this paper is trying to address.

How well do modern CATE estimation algorithms perform on real-world data?
Capturing real-world treatment effect heterogeneity is difficult to verify without ground-truth effects
Model evaluation lacks a statistical parameter that can rank models by MSE from real-world data
Innovation

Methods, ideas, or system contributions that make the work stand out.

Observational sampling adapted for CATE evaluation
Statistical parameter $Q$ and unbiased estimator family $\hat{Q}$ introduced
Real-world datasets handpicked so that outcomes come from the field rather than simulation
πŸ”Ž Similar Papers
No similar papers found.