🤖 AI Summary
Problem: Empirical difference-in-differences (DiD) applications in complex settings—such as staggered adoption, multiple treatment periods, and heterogeneous treatment timing—lack a unified conceptual and methodological framework, leading to ambiguity in identification assumptions, causal interpretation, and estimation validity. Method: We develop a classification system for DiD designs grounded in the potential outcomes framework, systematically clarifying identifying assumptions, causal interpretation boundaries, and consistency conditions for extensions including multi-period DiD, event-study designs, heterogeneous treatment timing, and weighted estimators. Our approach integrates DiD, inverse-probability weighting, robust standard error corrections, and event-study methodology, while formally justifying covariate adjustment and weighting schemes. Contribution: This paper provides an end-to-end normative framework covering design selection, estimation implementation, and inference correction. It enhances the transparency, replicability, and policy relevance of complex DiD analyses, offering a plug-and-play methodological guide for empirical social science research.
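The summary mentions integrating inverse-probability weighting into DiD. A minimal sketch of one common IPW form (in the spirit of Abadie, 2005) is below; it is not the paper's own estimator, and it assumes propensity scores are already estimated (the numbers in the example are hypothetical):

```python
def ipw_did(delta_y, treated, pscore):
    """IPW DiD sketch: average outcome change for the treated, minus the
    control average reweighted by p/(1-p) so that controls match the
    treated covariate distribution. `pscore` = estimated P(treated | X)."""
    treat_deltas = [d for d, t in zip(delta_y, treated) if t]
    treat_term = sum(treat_deltas) / len(treat_deltas)

    # Odds weights for control units only
    ctrl = [(d, p / (1.0 - p)) for d, t, p in zip(delta_y, treated, pscore) if not t]
    ctrl_term = sum(d * w for d, w in ctrl) / sum(w for _, w in ctrl)

    return treat_term - ctrl_term

# Hypothetical outcome changes, treatment indicators, and propensity scores
att = ipw_did([5.0, 4.0, 2.0, 1.0], [1, 1, 0, 0], [0.6, 0.6, 0.5, 0.25])
print(att)  # → 2.75
```

In practice the propensity scores would come from a first-stage model (e.g. logistic regression of treatment on covariates), and inference would need to account for that estimation step.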
📝 Abstract
Difference-in-Differences (DiD) is arguably the most popular quasi-experimental research design. Its canonical form, with two groups and two periods, is well understood. However, empirical practices can be ad hoc when researchers go beyond that simple case. This article provides an organizing framework for discussing different types of DiD designs and their associated DiD estimators. It discusses covariates, weights, handling multiple periods, and staggered treatments. The organizing framework, however, applies to other extensions of DiD methods as well.
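The canonical two-group, two-period design mentioned above reduces to a double difference of group-period means. A minimal sketch, with hypothetical numbers:

```python
def did_2x2(y_treat_pre, y_treat_post, y_ctrl_pre, y_ctrl_post):
    """Canonical 2x2 DiD: the treated group's pre-to-post change,
    minus the control group's change (which proxies the counterfactual
    trend under parallel trends)."""
    return (y_treat_post - y_treat_pre) - (y_ctrl_post - y_ctrl_pre)

# Hypothetical group-period mean outcomes
att = did_2x2(y_treat_pre=10.0, y_treat_post=15.0,
              y_ctrl_pre=9.0, y_ctrl_post=11.0)
print(att)  # → 3.0
```

The same number is the coefficient on the treatment-by-post interaction in a regression of the outcome on group, period, and their interaction; it is in the more general settings (multiple periods, staggered adoption) that this equivalence breaks down and the design choices the article organizes begin to matter.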