🤖 AI Summary
This study addresses treatment leakage bias, which arises when textual data, even if generated prior to intervention, contains implicit treatment signals or future-oriented language despite serving as a confounder proxy. The authors formally define this problem within both statistical and set-theoretic frameworks and propose four text distillation methods—similarity-based passage removal, distant supervision classification, salient feature removal, and iterative nullspace projection—to remove treatment-predictive content while preserving confounding information. Empirical evaluations on synthetic data and a real-world analysis of IMF policy effects on child mortality demonstrate that moderate distillation substantially reduces causal estimation bias without sacrificing precision, whereas excessive distillation degrades performance, thereby validating both the efficacy and necessity of the proposed approach.
📝 Abstract
Text-based causal inference increasingly employs textual data as proxies for unobserved confounders, yet this approach introduces a previously undertheorized source of bias: treatment leakage. Treatment leakage occurs when text intended to capture confounding information also contains signals predictive of treatment status, thereby inducing post-treatment bias in causal estimates. Critically, this problem can arise even when documents precede treatment assignment, as authors may employ future-referencing language that anticipates subsequent interventions. Despite growing recognition of this issue, no systematic methods exist for identifying and mitigating treatment leakage in text-as-confounder applications. This paper addresses this gap through three contributions. First, we provide formal statistical and set-theoretic definitions of treatment leakage that clarify when and why bias occurs. Second, we propose four text distillation methods -- similarity-based passage removal, distant supervision classification, salient feature removal, and iterative nullspace projection -- designed to eliminate treatment-predictive content while preserving confounder information. Third, we validate these methods through simulations using synthetic text and an empirical application examining International Monetary Fund structural adjustment programs and child mortality. Our findings indicate that moderate distillation optimally balances bias reduction against confounder retention, whereas overly stringent approaches degrade estimate precision.
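To make the distillation idea concrete, the last of the four methods, iterative nullspace projection, can be sketched in a few lines. The sketch below is an illustrative simplification, not the paper's implementation: it substitutes a least-squares linear predictor for a trained classifier, and the function names (`inlp_distill`, `fit_linear_direction`) are hypothetical. The core loop is the same in spirit: repeatedly fit a linear direction that predicts treatment from text embeddings, then project the embeddings onto that direction's nullspace so treatment becomes harder to recover linearly.

```python
import numpy as np

def nullspace_projection(w):
    # Projection matrix onto the nullspace of direction w:
    # P = I - w w^T / ||w||^2, so P x removes the component along w.
    w = w / np.linalg.norm(w)
    return np.eye(len(w)) - np.outer(w, w)

def fit_linear_direction(X, t):
    # Least-squares linear predictor of treatment t from embeddings X
    # (a stand-in for the linear classifier used in nullspace projection).
    w, *_ = np.linalg.lstsq(X, t - t.mean(), rcond=None)
    return w

def inlp_distill(X, t, n_iters=3):
    """Iteratively remove treatment-predictive directions from embeddings X."""
    P = np.eye(X.shape[1])
    Xp = X.copy()
    for _ in range(n_iters):
        w = fit_linear_direction(Xp, t)
        if np.linalg.norm(w) < 1e-10:
            break  # nothing left that linearly predicts treatment
        P = nullspace_projection(w) @ P
        Xp = X @ P.T  # project all embeddings into the accumulated nullspace
    return Xp
```

After a few iterations, a fresh linear probe recovers far less treatment signal from the distilled embeddings than from the originals, while directions orthogonal to the removed ones (where confounder information may live) are untouched. The number of iterations plays the role of the "moderate vs. excessive distillation" dial discussed in the findings.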