Prediction Loss Guided Decision-Focused Learning

📅 2025-09-10
📈 Citations: 0
Influential: 0
🤖 AI Summary
Two-stage prediction-decision frameworks suffer from instability under decision uncertainty: prediction-focused learning (PFL) ignores decision quality, while decision-focused learning (DFL) optimizes decisions directly but exhibits poor convergence. Method: We propose an end-to-end decision-oriented learning framework that jointly integrates prediction and decision losses via gradient fusion: prediction gradients guide decision optimization, enhancing decision quality without compromising training stability. We further introduce a sigmoid-decaying gradient perturbation strategy embedded within a differentiable optimization framework, ensuring compatibility with diverse DFL solvers. Contribution/Results: We provide theoretical convergence guarantees without requiring auxiliary training. Empirical evaluation across three stochastic optimization tasks demonstrates a significant reduction in decision regret, improved training stability, and robust performance, even in regimes where conventional methods fail.

📝 Abstract
Decision-making under uncertainty is often handled in two stages: predicting the unknown parameters, then optimizing decisions based on those predictions. While traditional prediction-focused learning (PFL) treats these two stages separately, decision-focused learning (DFL) trains the predictive model by directly optimizing the decision quality in an end-to-end manner. However, despite using exact or well-approximated gradients, vanilla DFL often suffers from unstable convergence due to its flat-and-sharp loss landscapes. In contrast, PFL yields more stable optimization, but overlooks the downstream decision quality. To address this, we propose a simple yet effective approach: perturbing the decision loss gradient with the prediction loss gradient to construct an update direction. Our method requires no additional training and can be integrated with any DFL solver. Using a sigmoid-like decaying parameter, we let the prediction loss gradient guide the decision loss gradient to train a predictive model that optimizes decision quality. We also provide a theoretical convergence guarantee to a Pareto stationary point under mild assumptions. Empirically, we demonstrate our method on three stochastic optimization problems, showing promising results compared to other baselines. We validate that our approach achieves lower regret with more stable training, even in situations where either PFL or DFL struggles.
Problem

Research questions and friction points this paper is trying to address.

Optimizing decision quality under uncertainty
Stabilizing convergence in decision-focused learning
Integrating prediction and decision loss gradients
Innovation

Methods, ideas, or system contributions that make the work stand out.

Perturbs the decision loss gradient with the prediction loss gradient
Uses a sigmoid-like decaying parameter to control the guidance
Ensures stable convergence with theoretical guarantees
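The core update described above can be sketched in a few lines. The sketch below is a minimal illustration, not the paper's implementation: the schedule shape (steepness `k`, midpoint 0.5) and the additive fusion form are assumptions chosen to match the description of a sigmoid-like decaying coefficient that lets the prediction loss gradient guide the decision loss gradient early in training and fades out later.

```python
import numpy as np

def sigmoid_decay(step, total_steps, k=10.0):
    """Hypothetical sigmoid-like schedule: near 1 early in training
    (prediction-guided), decaying toward 0 (pure decision loss)."""
    t = step / total_steps
    return 1.0 / (1.0 + np.exp(k * (t - 0.5)))

def fused_update_direction(grad_decision, grad_prediction, step, total_steps):
    """Perturb the decision-loss gradient with the prediction-loss
    gradient, weighted by the decaying coefficient."""
    alpha = sigmoid_decay(step, total_steps)
    return grad_decision + alpha * grad_prediction
```

In practice the two gradients would come from backpropagating the prediction loss (e.g. MSE against observed parameters) and the decision loss (e.g. regret from a differentiable solver) through the same predictive model; the fused direction is then handed to any standard optimizer.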