🤖 AI Summary
Causal estimation from longitudinal observational data is prone to bias under model misspecification, particularly when restrictive structural assumptions (e.g., factor models) fail to hold.
Method: We propose a robust Bayesian causal inference framework that directly models unit- and time-specific causal effects. It employs generalized Bayesian inference to quantify the impact of model misspecification and introduces an adaptive learning rate ω, selected via a proper scoring rule that jointly evaluates point-estimation accuracy and posterior-interval calibration, thereby grounding hyperparameter selection in decision-theoretic principles. The framework integrates time- and unit-varying effect heterogeneity with adjustment for unobserved confounding.
Results: Simulation and empirical studies demonstrate substantial improvements in the calibration, precision, and robustness of causal effect estimates, effectively mitigating bias arising from violations of structural assumptions.
📝 Abstract
This paper develops a Bayesian framework for robust causal inference from longitudinal observational data. Many contemporary methods rely on structural assumptions, such as factor models, to adjust for unobserved confounding, but these assumptions can lead to biased estimates of causal effects when mis-specified. We focus on directly estimating time- and unit-specific causal effects and use generalised Bayesian inference to quantify model mis-specification and adjust for it, while retaining interpretable posterior inference. We select the learning rate ω based on a proper scoring rule that jointly evaluates point and interval accuracy of the causal estimand, thus providing a coherent, decision-theoretic foundation for tuning ω. Simulation studies and applications to real data demonstrate improved calibration, sharpness, and robustness in estimating causal effects.
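To make the generalised-Bayes idea concrete, here is a minimal sketch of a tempered posterior with the learning rate ω chosen by minimising a proper scoring rule (the interval score) on held-out data. The toy normal-mean model, the grid search, and the interval-score criterion are illustrative assumptions for exposition; they are not the paper's actual model or selection procedure.

```python
import numpy as np

def tempered_posterior(x, sigma2, mu0, tau02, omega):
    """Generalised-Bayes (tempered) posterior for a normal mean.

    Posterior ∝ prior × likelihood^omega: the conjugate normal-normal
    update with the data term down-weighted by the learning rate omega.
    Returns the posterior mean and variance.
    """
    n = len(x)
    prec = 1.0 / tau02 + omega * n / sigma2
    mean = (mu0 / tau02 + omega * x.sum() / sigma2) / prec
    return mean, 1.0 / prec

def interval_score(lo, hi, y, alpha=0.05):
    """Proper scoring rule for a central (1 - alpha) prediction interval:
    penalises width plus scaled exceedances outside [lo, hi]."""
    width = hi - lo
    below = (2.0 / alpha) * np.maximum(lo - y, 0.0)
    above = (2.0 / alpha) * np.maximum(y - hi, 0.0)
    return np.mean(width + below + above)

def select_omega(train, holdout, sigma2, mu0, tau02, grid, alpha=0.05):
    """Pick the learning rate omega that minimises the held-out interval
    score of the posterior-predictive interval."""
    z = 1.959963984540054  # 97.5% standard-normal quantile
    best_omega, best_score = None, np.inf
    for omega in grid:
        m, v = tempered_posterior(train, sigma2, mu0, tau02, omega)
        sd = np.sqrt(v + sigma2)  # posterior-predictive std. dev.
        score = interval_score(m - z * sd, m + z * sd, holdout, alpha)
        if score < best_score:
            best_omega, best_score = omega, score
    return best_omega

# Hypothetical data: well-specified here, so tempering is mild.
rng = np.random.default_rng(0)
data = rng.normal(1.0, 2.0, size=200)
grid = np.linspace(0.1, 1.0, 10)
omega_hat = select_omega(data[:150], data[150:], sigma2=4.0,
                         mu0=0.0, tau02=10.0, grid=grid)
print(omega_hat)
```

With ω = 1 the update reduces to the standard conjugate posterior; smaller ω inflates the posterior variance, which is how generalised Bayes hedges against mis-specification, and the scoring rule trades that extra width against coverage on held-out data.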