🤖 AI Summary
This study addresses the low power of conventional tests in overidentified instrumental variables regressions with heteroskedastic or autocorrelated errors, even when the null and alternative hypotheses are well separated. Using total variation distance and Kraft's theorem, the paper characterizes the decision-theoretic frontier of the testing problem, bounding the maximal attainable gap between a test's power and its significance level. It shows that Lagrange multiplier and conditional quasi-likelihood ratio tests can have power arbitrarily close to size because they do not fully exploit the reduced-form likelihood, whereas the conditional likelihood ratio (CLR) test, which uses the full reduced-form likelihood, avoids this failure: its power-size gap converges to one if and only if the testing problem becomes trivial in total variation distance, so the CLR attains the frontier whenever any test can. An empirical illustration based on the Yogo (2004) design shows that these failures of conventional tests arise in empirically relevant configurations.
📝 Abstract
We characterize the maximal attainable power-size gap in overidentified instrumental variables models with heteroskedastic or autocorrelated (HAC) errors. Using total variation distance and Kraft's theorem, we define the decision-theoretic frontier of the testing problem. We show that Lagrange multiplier and conditional quasi-likelihood ratio tests can have power arbitrarily close to size even when the null and alternative are well separated, because they do not fully exploit the reduced-form likelihood. In contrast, the conditional likelihood ratio (CLR) test uses the full reduced-form likelihood. We prove that the power-size gap of CLR converges to one if and only if the testing problem becomes trivial in total variation distance, so that CLR attains the decision-theoretic frontier whenever any test can. An empirical illustration based on Yogo (2004) shows that these failures arise in empirically relevant configurations.
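The decision-theoretic frontier invoked above rests on a standard identity: for any test, power minus size is at most the total variation distance between the null and alternative distributions, with equality attained by the likelihood-ratio test. A minimal sketch of this bound, using a hypothetical Gaussian shift example (N(0,1) versus N(mu,1), not the paper's IV model), where the frontier has the closed form 2*Phi(|mu|/2) - 1:

```python
# Illustration of the bound: power - size <= TV(P0, P1) for every test,
# with equality for the likelihood-ratio test. Hypothetical example with
# P0 = N(0,1) and P1 = N(mu,1); this is NOT the paper's IV testing problem.
import math

def phi_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def tv_gaussian_shift(mu):
    """Total variation distance between N(0,1) and N(mu,1)."""
    return 2.0 * phi_cdf(abs(mu) / 2.0) - 1.0

def gap_threshold_test(mu, c):
    """Power minus size of the one-sided test that rejects when X > c."""
    power = 1.0 - phi_cdf(c - mu)   # P_{N(mu,1)}(X > c)
    size = 1.0 - phi_cdf(c)         # P_{N(0,1)}(X > c)
    return power - size

mu = 2.0
frontier = tv_gaussian_shift(mu)
# The likelihood-ratio cutoff sits at the midpoint mu/2 and attains the
# frontier; any other cutoff falls strictly below it.
optimal_gap = gap_threshold_test(mu, mu / 2.0)
suboptimal_gap = gap_threshold_test(mu, 1.5)
print(frontier, optimal_gap, suboptimal_gap)
```

As the abstract's trivial/non-trivial dichotomy suggests, the gap can approach one only when the total variation distance itself approaches one; a test that fails to track the likelihood ratio (here, a badly placed cutoff) stays strictly inside the frontier.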