🤖 AI Summary
Traditional difference-in-differences (DID) methods break down when outcomes are interval-valued, as is common in survey and administrative data, because the parallel trends assumption becomes untestable or violated, yielding uninformative or counterintuitive causal estimates. To address this, the authors propose a "parallel shift" identification strategy that extends the DID framework to interval-valued outcome variables. Grounded in nonparametric identification and robust inference, the approach imposes no strong distributional assumptions on the interval endpoints, improving the validity and reliability of causal effect estimation under measurement imprecision. Its practical utility is demonstrated through a reanalysis of the influential minimum wage study of Card and Krueger (1994), producing informative bounds on causal effects. This work offers a theoretically coherent and empirically tractable solution to the pervasive challenge of interval data in policy evaluation.
📝 Abstract
Difference-in-differences (DID) is one of the most popular tools for evaluating the causal effects of policy interventions. This paper extends the DID methodology to accommodate interval outcomes, which are often encountered in empirical studies using survey or administrative data. We point out that a naive application or extension of the conventional parallel trends assumption may yield uninformative or counterintuitive results, and we present a suitable identification strategy, called parallel shifts, that exhibits desirable properties. The practical appeal of the proposed method is illustrated by revisiting an influential minimum wage study by Card and Krueger (1994).
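To make the problem concrete, the sketch below shows the *naive* worst-case bounding of the DID estimand when each outcome is only observed as an interval: one simply replaces each group-period mean by its most pessimistic endpoint. This is an illustration of the naive approach the abstract cautions can be uninformative, not the paper's parallel-shift strategy; the function name and data layout are hypothetical.

```python
import numpy as np

def naive_did_bounds(treat_pre, treat_post, ctrl_pre, ctrl_post):
    """Worst-case bounds on the DID estimand with interval outcomes.

    Each argument is an (n, 2) array whose rows are [lo, hi] interval
    endpoints for one unit's outcome. Because the DID estimand is
    (treat_post - treat_pre) - (ctrl_post - ctrl_pre), subtraction in
    interval arithmetic swaps which endpoint minimises or maximises,
    so the bounds pair opposite endpoints across terms.
    """
    def mean_lo(x):
        return np.asarray(x, dtype=float)[:, 0].mean()

    def mean_hi(x):
        return np.asarray(x, dtype=float)[:, 1].mean()

    lower = (mean_lo(treat_post) - mean_hi(treat_pre)) \
        - (mean_hi(ctrl_post) - mean_lo(ctrl_pre))
    upper = (mean_hi(treat_post) - mean_lo(treat_pre)) \
        - (mean_lo(ctrl_post) - mean_hi(ctrl_pre))
    return lower, upper

# Tiny made-up example: even modest interval widths in all four
# group-period cells accumulate, so the identified set can be wide --
# the "uninformative" behaviour the paper's parallel shifts address.
lo, hi = naive_did_bounds(
    treat_pre=[[1.0, 2.0]], treat_post=[[4.0, 5.0]],
    ctrl_pre=[[0.0, 1.0]], ctrl_post=[[1.0, 2.0]],
)
```

In this toy example the bounds come out to [0, 4]: each of the four interval widths (here 1 each) adds to the width of the identified set, which is why naive bounds grow quickly with measurement imprecision.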