Autocorrelated Optimize-via-Estimate: Predict-then-Optimize versus Finite-sample Optimal

📅 2026-02-02
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study addresses the challenge that conventional “predict-then-optimize” approaches often fail to guarantee decision quality under autocorrelated uncertainty, especially with limited sample sizes. Focusing on vector autoregressive moving average (VARMA) processes, the authors propose the Autocorrelated Optimize-via-Estimate (A-OVE) framework, which directly optimizes out-of-sample performance. The key innovation is the first integration of finite-sample optimality with autocorrelation structure, enabling efficient computation through recursively updated sufficient statistics. The work further reveals a potential disconnect between prediction accuracy and decision quality. Empirical results in portfolio optimization with transaction costs demonstrate that A-OVE substantially reduces regret and remains robust to mild model misspecification.
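The summary's "recursively updated sufficient statistics" can be illustrated with a minimal sketch: maintaining a running mean and lag-k cross-product sums for a scalar autocorrelated series, updated in O(max_lag) work per new observation instead of recomputing from the full history. This is a hypothetical illustration of the general recursion idea, not the paper's A-OVE update.

```python
from collections import deque

class RecursiveStats:
    """Running sufficient statistics (mean and lag-k autocovariances)
    for a scalar autocorrelated series. Hypothetical sketch of a
    recursive update; not the paper's actual A-OVE recursion."""

    def __init__(self, max_lag):
        self.max_lag = max_lag
        self.n = 0
        self.sum = 0.0
        self.cross = [0.0] * (max_lag + 1)   # running sums of x_t * x_{t-k}
        self.buffer = deque(maxlen=max_lag)  # most recent observations

    def update(self, x):
        # O(max_lag) per step: fold the new observation into each sum
        self.n += 1
        self.sum += x
        self.cross[0] += x * x
        for k, past in enumerate(reversed(self.buffer), start=1):
            self.cross[k] += x * past
        self.buffer.append(x)

    def mean(self):
        return self.sum / self.n

    def autocov(self, k):
        # plug-in estimate: mean of x_t * x_{t-k} minus squared sample mean
        m = self.mean()
        return self.cross[k] / (self.n - k) - m * m
```

A downstream decision rule can then be expressed as a function of these statistics alone, which is what makes the "solution as a function of sufficient statistics" formulation computationally cheap to keep current as data arrives.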

📝 Abstract
Models that directly optimize for out-of-sample performance in the finite-sample regime have emerged as a promising alternative to traditional estimate-then-optimize approaches in data-driven optimization. In this work, we compare their performance under autocorrelated uncertainties, specifically a Vector Autoregressive Moving Average (VARMA(p,q)) process. We propose an Autocorrelated Optimize-via-Estimate (A-OVE) model that obtains an out-of-sample optimal solution as a function of sufficient statistics, and we derive a recursive form for computing those statistics. We evaluate these models on a portfolio optimization problem with trading costs. A-OVE achieves low regret relative to a perfect-information oracle, outperforming predict-then-optimize machine learning benchmarks. Notably, machine learning models with higher predictive accuracy can yield poorer decision quality, echoing the growing literature in data-driven optimization. Performance is retained under mild misspecification.
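The regret metric in the abstract compares a policy's realized performance against a perfect-information oracle, net of trading costs. A minimal sketch of such a metric follows; the proportional cost model, the `trade_cost` parameter, and the oracle construction here are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def regret_vs_oracle(returns, weights_policy, weights_oracle, trade_cost=0.001):
    """Regret of a portfolio policy against a perfect-information oracle,
    net of proportional trading costs. Hypothetical sketch: the paper's
    exact cost model and oracle definition are not reproduced here.

    returns:        (T, d) realized asset returns
    weights_policy: (T, d) weights chosen before each period
    weights_oracle: (T, d) weights chosen with perfect foresight
    """
    def net_performance(W):
        gross = (W * returns).sum(axis=1)                  # per-period portfolio return
        turnover = np.abs(np.diff(W, axis=0)).sum(axis=1)  # rebalancing volume
        costs = np.concatenate([[0.0], trade_cost * turnover])
        return (gross - costs).sum()

    return net_performance(weights_oracle) - net_performance(weights_policy)
```

By construction the oracle's regret against itself is zero, so a policy with near-zero regret is close to the best achievable decisions in hindsight, which is the benchmark A-OVE is evaluated against.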
Problem

Research questions and friction points this paper is trying to address.

autocorrelated uncertainty
finite-sample optimization
predict-then-optimize
data-driven optimization
out-of-sample performance
Innovation

Methods, ideas, or system contributions that make the work stand out.

Optimize-via-Estimate
autocorrelated uncertainty
finite-sample optimization
sufficient statistics
data-driven decision-making
Zichun Wang
Student, West Virginia State University, U.S.A.
AI · Machine Learning · LLM
G. Loke
Durham University Business School
Ruiting Zuo
The Hong Kong University of Science and Technology (Guangzhou)