🤖 AI Summary
This paper addresses inference for partially identified parameters in incomplete models. We propose a unified inferential method that is simultaneously robust to model misspecification and information-efficient. The core innovation is the first Kullback–Leibler (KL) information criterion that jointly accommodates incompleteness and misspecification robustness, yielding a non-empty, well-defined set of pseudo-true parameters. The method exploits the information in both discrete and continuous covariates and enables computationally tractable inference via an asymptotically pivotal Rao score statistic. We establish consistency and asymptotic normality under both correct specification and misspecification. Relative to existing approaches, the framework substantially improves the reliability and applicability of inference under partial identification, providing a unified treatment of incomplete models with set-valued predictions that is both theoretically rigorous and practically implementable.
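To fix ideas on the pivotality claim, here is a minimal sketch of a Rao score test in a textbook complete, point-identified setting (a Bernoulli model), not the paper's incomplete-model construction: the statistic is evaluated at the hypothesized parameter only, and is asymptotically chi-square(1) under the null regardless of nuisance features of the data, which is what "asymptotically pivotal" means.

```python
import numpy as np

def rao_score_statistic(x, theta0):
    """Rao score statistic for H0: theta = theta0 in a Bernoulli(theta) model.

    U(theta0)^2 / I(theta0) is asymptotically chi-square(1) under H0,
    so no estimation under the alternative is required.
    """
    n = len(x)
    # Score of the Bernoulli log-likelihood evaluated at theta0
    score = np.sum(x - theta0) / (theta0 * (1.0 - theta0))
    # Fisher information of n Bernoulli draws at theta0
    info = n / (theta0 * (1.0 - theta0))
    return score ** 2 / info

rng = np.random.default_rng(0)
x = rng.binomial(1, 0.5, size=500)  # data generated under H0: theta = 0.5
stat = rao_score_statistic(x, 0.5)
# Algebraically, stat = n * (xbar - theta0)^2 / (theta0 * (1 - theta0))
```

Comparing `stat` against a chi-square(1) critical value (3.84 at the 5% level) gives the score test; the paper's contribution is extending this logic to set-valued predictions, where the null value is a pseudo-true set rather than a point.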
📝 Abstract
This paper proposes an information-based inference method for partially identified parameters in incomplete models that is valid both when the model is correctly specified and when it is misspecified. Key features of the method are: (i) it is based on minimizing a suitably defined Kullback–Leibler information criterion that accounts for incompleteness of the model and delivers a non-empty pseudo-true set; (ii) it is computationally tractable; (iii) its implementation is the same for both correctly and incorrectly specified models; (iv) it exploits all information provided by variation in discrete and continuous covariates; (v) it relies on Rao's score statistic, which is shown to be asymptotically pivotal.
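Feature (i) builds on the classical idea that minimizing a KL criterion over a misspecified family selects a pseudo-true parameter. The following is a minimal numerical sketch of that classical idea only, under illustrative assumptions (a misspecified N(theta, 1) family fit to exponential data); it does not implement the paper's criterion, which additionally handles incompleteness and delivers a pseudo-true *set*.

```python
import numpy as np

rng = np.random.default_rng(1)
data = rng.exponential(scale=2.0, size=4000)  # true DGP is not normal

def neg_avg_loglik(theta, x):
    # Empirical analogue of KL(p || N(theta, 1)), up to a theta-free constant:
    # minimizing this over theta minimizes the KL divergence to the family.
    return 0.5 * np.mean((x - theta) ** 2)

# Grid search for the pseudo-true parameter (the KL minimizer)
grid = np.linspace(0.0, 5.0, 2001)
values = [neg_avg_loglik(t, data) for t in grid]
theta_pseudo = grid[int(np.argmin(values))]
# For a N(theta, 1) family the KL projection of any distribution is its mean,
# so theta_pseudo should sit close to data.mean() (population value 2.0)
```

Under misspecification the minimizer is well defined even though no parameter value is "true"; the paper's point is that a naive KL criterion can fail in incomplete models (the minimizing set can be empty or ill-behaved), which motivates its modified criterion.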