🤖 AI Summary
Traditional two-stage least squares (2SLS) for addressing endogeneity in linear regression relies on explicit instrumental variables (IVs) and is incompatible with sequential modeling frameworks. Method: This paper investigates how large language models (LLMs) based on the Transformer architecture can implicitly resolve endogeneity via in-context learning. We theoretically establish that Transformer self-attention inherently implements a bilevel optimization process, converging exponentially to the 2SLS solution without requiring explicit IV specification. Building on this insight, we propose an IV-aware contextual pretraining paradigm with provable error bounds. Contribution/Results: Experiments demonstrate substantial improvements in coefficient estimation accuracy and contextual prediction robustness under endogeneity. Our work provides the first theoretically grounded, pretrainable LLM-based framework for causal inference that implicitly learns instrument-like representations—guaranteeing convergence and enabling end-to-end integration into sequence modeling pipelines.
📝 Abstract
We explore the capability of transformers to address endogeneity in in-context linear regression. Our main finding is that transformers inherently possess a mechanism to handle endogeneity effectively using instrumental variables (IV). First, we demonstrate that the transformer architecture can emulate a gradient-based bi-level optimization procedure that converges to the widely used two-stage least squares ($\textsf{2SLS}$) solution at an exponential rate. Next, we propose an in-context pretraining scheme and provide theoretical guarantees showing that the global minimizer of the pretraining loss achieves a small excess loss. Our extensive experiments validate these theoretical findings, showing that the trained transformer provides more robust and reliable in-context predictions and coefficient estimates than the $\textsf{2SLS}$ method in the presence of endogeneity.
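To make the target of the bi-level procedure concrete, here is a minimal numpy sketch (not the paper's transformer construction) of the $\textsf{2SLS}$ estimator it converges to: stage 1 projects the endogenous regressors onto the instruments, stage 2 regresses the outcome on that projection. The same two stages are then emulated by plain gradient descent, which for strongly convex least-squares objectives converges at an exponential (linear) rate. All variable names and the simulated data-generating process are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 5000, 2
beta_true = np.array([1.5, -0.8])

# Simulated endogeneity: instruments Z, unobserved confounder u that
# enters both the regressors X and the outcome y.
Z = rng.normal(size=(n, d))
u = rng.normal(size=n)
G = np.array([[1.0, 0.3], [0.2, 1.0]])               # first-stage coefficients
X = Z @ G + u[:, None] + 0.5 * rng.normal(size=(n, d))
y = X @ beta_true + u + 0.5 * rng.normal(size=n)

# Naive OLS is biased because X is correlated with the error term (via u).
beta_ols = np.linalg.lstsq(X, y, rcond=None)[0]

# Closed-form 2SLS.
W_hat = np.linalg.lstsq(Z, X, rcond=None)[0]          # stage 1: X on Z
X_hat = Z @ W_hat
beta_2sls = np.linalg.lstsq(X_hat, y, rcond=None)[0]  # stage 2: y on X_hat

# Gradient-descent emulation of both stages; each stage is a least-squares
# problem, so the iterates contract geometrically toward the stage optimum.
W = np.zeros((d, d))
b = np.zeros(d)
lr1 = 1.0 / np.linalg.eigvalsh(Z.T @ Z / n).max()
for _ in range(200):                                  # stage-1 GD
    W -= lr1 * (Z.T @ (Z @ W - X)) / n
Xp = Z @ W
lr2 = 1.0 / np.linalg.eigvalsh(Xp.T @ Xp / n).max()
for _ in range(200):                                  # stage-2 GD
    b -= lr2 * (Xp.T @ (Xp @ b - y)) / n

print("OLS  :", beta_ols)      # biased away from beta_true
print("2SLS :", beta_2sls)     # close to beta_true
print("GD   :", b)             # matches closed-form 2SLS
```

On this simulation the gradient-descent estimate coincides with the closed-form $\textsf{2SLS}$ solution to several decimal places, while OLS stays visibly biased; the paper's claim is that self-attention layers can realize this kind of two-stage iteration in-context, without the instruments being explicitly designated.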