🤖 AI Summary
This paper challenges the prevailing view that causal modeling is a necessary condition for investment efficiency. It systematically examines whether predictive models, despite structural misspecification (e.g., omitted variables or an incorrect functional form), can still generate valid investment signals and support portfolio optimization.
Method: Leveraging formal analysis, statistical learning theory, and robust optimization, the study identifies calibration error, rather than causal deficiency, as the primary driver of performance degradation, and rigorously distinguishes signal ranking accuracy from magnitude bias.
Contribution/Results: Empirical counterexamples demonstrate that non-causal predictive models can yield directionally correct asset rankings, stable mean–variance frontiers, and portfolios with positive Sharpe ratios. The core contribution is establishing a “prediction-first” paradigm: predictive validity—distinct from causal interpretability—suffices to ensure investment effectiveness. This reframes the theoretical and practical foundations of financial machine learning, emphasizing prediction robustness over causal fidelity.
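The ranking claim can be illustrated with a toy simulation. This sketch is not from the paper; the two-factor setup, the premia, and all variable names are illustrative assumptions. It shows that a model omitting a priced factor can still rank assets nearly correctly when the observed factor dominates the cross-section of expected returns.

```python
import numpy as np

rng = np.random.default_rng(0)
n_assets = 100

# Assumed true data-generating process: two priced factors.
beta_obs = rng.uniform(0.5, 1.5, n_assets)    # loading the model observes
beta_omit = rng.uniform(-0.5, 0.5, n_assets)  # omitted-variable loading
prem_obs, prem_omit = 0.05, 0.01              # assumed factor risk premia
true_mu = beta_obs * prem_obs + beta_omit * prem_omit

# Misspecified predictive model: scores assets using the observed factor only.
signal = beta_obs * prem_obs

def ranks(x):
    # Rank positions via double argsort (0 = smallest).
    return np.argsort(np.argsort(x))

# Spearman rank correlation between the misspecified signal and true
# expected returns: Pearson correlation computed on the rank vectors.
rank_corr = np.corrcoef(ranks(signal), ranks(true_mu))[0, 1]
```

Under these assumed parameters the omitted factor perturbs magnitudes far more than orderings, so `rank_corr` stays close to 1: the directional content of the signal survives the misspecification, which is the paper's ranking-versus-sizing distinction in miniature.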
📝 Abstract
This paper challenges the claim that causal factor modeling is a necessary condition for investment efficiency. Through formal analysis and empirical counterexamples, we show that predictive models, even when structurally misspecified, can produce directionally accurate signals, valid mean–variance frontiers, and positive Sharpe ratios. Contrary to the assertion that causal omission leads to systematic inefficiency, we demonstrate that calibration errors, not lack of causal structure, are the primary source of performance degradation. Drawing from statistical learning theory and robust optimization, we argue that predictive validity, not causal interpretability, is the key metric in portfolio construction. Our theoretical results distinguish between signal ranking and sizing, and our experiments confirm that portfolio optimization remains viable even under omitted variables and nonlinearity. These findings support a prediction-first modeling philosophy: models may be wrong in structure, yet still useful in practice.
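The portfolio-level claim can likewise be sketched numerically. The simulation below is not the paper's experiment; the return process, premia, and normalization are assumptions chosen for illustration. It builds mean–variance weights from expected returns that ignore an omitted priced factor and checks whether the realized portfolio still earns a positive Sharpe ratio.

```python
import numpy as np

rng = np.random.default_rng(1)
n_assets, n_periods = 20, 5000

# Assumed returns: one observed priced factor, one omitted factor, noise.
beta_obs = rng.uniform(0.5, 1.5, n_assets)
beta_omit = rng.uniform(-0.5, 0.5, n_assets)
f_obs = 0.04 + 0.10 * rng.standard_normal(n_periods)
f_omit = 0.01 + 0.10 * rng.standard_normal(n_periods)
eps = 0.05 * rng.standard_normal((n_periods, n_assets))
R = np.outer(f_obs, beta_obs) + np.outer(f_omit, beta_omit) + eps

# Misspecified inputs: expected returns from the observed factor only.
mu_hat = beta_obs * 0.04          # omits the second factor's premium
Sigma = np.cov(R, rowvar=False)   # sample covariance matrix

# Unconstrained mean-variance (tangency-direction) weights,
# normalized to unit gross exposure.
w = np.linalg.solve(Sigma, mu_hat)
w /= np.abs(w).sum()

# Realized portfolio returns and per-period Sharpe ratio.
port = R @ w
sharpe = port.mean() / port.std()
```

Even though `mu_hat` is biased by the omitted variable, it remains positively aligned with the true expected returns, so the optimizer's direction is preserved and `sharpe` comes out positive: misspecification degrades sizing precision more than it degrades viability, consistent with the paper's argument.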