🤖 AI Summary
This work addresses the Job-Shop Scheduling Problem (JSP) and the Flexible Job-Shop Scheduling Problem (FJSP) with an offline reinforcement learning (RL) framework that learns effective scheduling policies directly from historical data, without any online environment interaction. Methodologically, it trains on synthetically generated data from stochastic heuristics, a counterintuitive choice that empirically outperforms training on higher-quality heuristic data. The framework introduces a conservative discrete quantile Actor-Critic architecture that integrates delayed policy updates and return-distribution modeling, and explicitly represents the action space as machine-operation pairs. Experiments show that the method surpasses the original data-generating heuristics using only 10–20 training instances, and consistently outperforms state-of-the-art offline and online RL baselines on JSP and FJSP benchmarks, achieving substantial gains in sample efficiency and generalization across diverse problem instances.
📝 Abstract
The Job-Shop Scheduling Problem (JSP) and the Flexible Job-Shop Scheduling Problem (FJSP) are canonical combinatorial optimization problems with wide-ranging applications in industrial operations. In recent years, many online reinforcement learning (RL) approaches have been proposed to learn constructive heuristics for JSP and FJSP. Although effective, these online RL methods require millions of interactions with simulated environments that may not capture real-world complexities, and their random policy initialization leads to poor sample efficiency. To address these limitations, we introduce Conservative Discrete Quantile Actor-Critic (CDQAC), a novel offline RL algorithm that learns effective scheduling policies directly from historical data, eliminating the need for costly online interactions while maintaining the ability to improve upon suboptimal training data. CDQAC couples a quantile-based critic with a delayed policy update, estimating the return distribution of each machine-operation pair rather than selecting pairs outright. Our extensive experiments demonstrate CDQAC's remarkable ability to learn from diverse data sources. CDQAC consistently outperforms the original data-generating heuristics and surpasses state-of-the-art offline and online RL baselines. In addition, CDQAC is highly sample efficient, requiring only 10-20 training instances to learn high-quality policies. Surprisingly, we find that CDQAC performs better when trained on data generated by a random heuristic than when trained on higher-quality data from genetic algorithms and priority dispatching rules.
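The abstract names two key ingredients: a quantile-based (distributional) critic over discrete machine-operation actions, and a conservative term that keeps the learned policy close to the offline data. The sketch below illustrates how such losses are commonly computed in the offline-RL literature; the function names, shapes, and exact forms are illustrative assumptions, not CDQAC's actual implementation.

```python
import numpy as np

def quantile_huber_loss(pred_quantiles, target_samples, taus, kappa=1.0):
    """Distributional critic loss: fit N quantiles of the return distribution
    for one (machine, operation) action.

    pred_quantiles: (N,) predicted quantile values
    target_samples: (M,) Bellman target samples
    taus:           (N,) quantile fractions, e.g. (i + 0.5) / N
    """
    u = target_samples[None, :] - pred_quantiles[:, None]   # pairwise TD errors
    huber = np.where(np.abs(u) <= kappa,
                     0.5 * u ** 2,
                     kappa * (np.abs(u) - 0.5 * kappa))     # smooth near zero
    weight = np.abs(taus[:, None] - (u < 0).astype(float))  # asymmetric quantile weight
    return float((weight * huber / kappa).mean())

def conservative_penalty(q_values, dataset_action):
    """CQL-style regularizer: push Q down on all actions (log-sum-exp)
    and up on the action actually taken in the dataset."""
    m = q_values.max()
    logsumexp = np.log(np.sum(np.exp(q_values - m))) + m    # numerically stable
    return float(logsumexp - q_values[dataset_action])

# Toy usage: 8 quantiles for one machine-operation pair,
# 3 candidate machine-operation pairs at this decision point.
taus = (np.arange(8) + 0.5) / 8
critic_loss = quantile_huber_loss(np.zeros(8), np.array([0.2, -0.1]), taus)
penalty = conservative_penalty(np.array([1.0, 2.0, 0.5]), dataset_action=1)
```

In this framing, the delayed policy update mentioned in the abstract would simply mean the actor is optimized less frequently than the critic, a common stabilization trick in actor-critic methods.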