🤖 AI Summary
This study addresses the challenge that conventional “predict-then-optimize” approaches often fail to guarantee decision quality under autocorrelated uncertainty, especially with limited sample sizes. Focusing on vector autoregressive moving average (VARMA) processes, the authors propose the Autocorrelated Optimize-via-Estimate (A-OVE) framework, which directly optimizes out-of-sample performance. The key innovation is the first integration of finite-sample optimality with autocorrelation structure, made computationally efficient through recursively updated sufficient statistics. The work further reveals a potential disconnect between prediction accuracy and decision quality. Empirical results in portfolio optimization with transaction costs demonstrate that A-OVE substantially reduces regret and remains robust to mild model misspecification.
📝 Abstract
Models that directly optimize for out-of-sample performance in the finite-sample regime have emerged as a promising alternative to traditional estimate-then-optimize approaches in data-driven optimization. In this work, we compare their performance under autocorrelated uncertainties, specifically under a vector autoregressive moving average (VARMA(p, q)) process. We propose an Autocorrelated Optimize-via-Estimate (A-OVE) model that obtains an out-of-sample optimal solution as a function of sufficient statistics, and derive a recursive form for computing those statistics. We evaluate these models on a portfolio optimization problem with trading costs. A-OVE achieves low regret relative to a perfect-information oracle, outperforming predict-then-optimize machine learning benchmarks. Notably, machine learning models with higher predictive accuracy can yield poorer decision quality, echoing a growing literature in data-driven optimization. Performance is retained under mild model misspecification.
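The recursive sufficient-statistics idea can be illustrated with a minimal sketch. Note that the class name and the particular statistics tracked here (running mean plus lag-0 and lag-1 cross-product matrices, the kind of quantities a VARMA-type estimator consumes) are illustrative assumptions, not the paper's actual construction:

```python
import numpy as np

class RecursiveStats:
    """Illustrative recursively updated statistics for a vector time series.

    Assumption: tracks a running mean and lag-0 / lag-1 cross-product
    sums; the paper's exact sufficient statistics are not specified here.
    """

    def __init__(self, dim):
        self.n = 0
        self.mean = np.zeros(dim)
        self.s0 = np.zeros((dim, dim))  # sum of y_t y_t^T
        self.s1 = np.zeros((dim, dim))  # sum of y_t y_{t-1}^T
        self.prev = None

    def update(self, y):
        """Incorporate one new observation in O(dim^2) time."""
        y = np.asarray(y, dtype=float)
        self.n += 1
        self.mean += (y - self.mean) / self.n   # online mean update
        self.s0 += np.outer(y, y)
        if self.prev is not None:
            self.s1 += np.outer(y, self.prev)   # lag-1 cross-products
        self.prev = y
```

Each new observation updates the statistics in place, so a decision rule expressed as a function of these statistics never needs to revisit the full history.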