🤖 AI Summary
This paper addresses the limited scalability and generalization of reinforcement learning (RL) methods for scheduling semiconductor front-end wafer fabrication in real industrial settings. The authors propose an Evolution Strategies (ES)-based framework designed around the selection and combination of bottleneck tools to be controlled. Unlike policy-gradient (PG) approaches, the method integrates multi-agent state modeling, discrete-event simulation, and a parallel CPU-based training architecture. On the Minifab and SMT2020 benchmarks it achieves double-digit percentage reductions in tardiness and single-digit throughput improvements; on real fab data it reduces tardiness by up to 4% and increases throughput by up to 1%. Key contributions are: (i) the adaptation of ES to highly constrained wafer fab scheduling, markedly improving generalization across varying load and tool-failure conditions; (ii) empirical validation that diverse training data is critical for robust scheduling performance; and (iii) near-linear scaling of training efficiency with the number of CPU cores.
📝 Abstract
Benchmark datasets are crucial for evaluating approaches to scheduling or dispatching in the semiconductor industry during both development and deployment. However, commonly used benchmark datasets such as Minifab or SMT2020 lack the complex details and constraints found in real-world scenarios. To mitigate this shortcoming, we compare open-source simulation models with a real industry dataset to evaluate how optimization methods scale with different levels of complexity. Specifically, we focus on Reinforcement Learning methods that perform optimization based on policy gradients and Evolution Strategies. Our research provides insights into the effectiveness of these optimization methods and their applicability to realistic semiconductor front-end fab simulations. We show that our proposed Evolution Strategies-based method scales much better than a comparable policy-gradient-based approach. Moreover, we identify the selection and combination of relevant bottleneck tools controlled by the agent as crucial for efficient optimization. For generalization across different loading scenarios and stochastic tool-failure patterns, a diverse training dataset proves advantageous. While the overall approach is computationally expensive, it scales well with the number of CPU cores used for training. For the real industry dataset, we achieve improvements of up to 4% in tardiness and up to 1% in throughput. For the less complex open-source models Minifab and SMT2020, Evolution Strategies yields double-digit percentage improvements in tardiness and single-digit percentage improvements in throughput.
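To make the core idea concrete, the following is a minimal sketch of an Evolution Strategies loop of the kind the abstract describes: sample Gaussian perturbations of a dispatching policy's parameters, evaluate each candidate in a simulator, and update along the fitness-weighted average. The one-tool simulator, the linear priority rule, and all names here are illustrative assumptions for this sketch, not the paper's actual models or implementation; the per-candidate evaluations are the part that parallelizes across CPU cores.

```python
import random

random.seed(0)

def simulate_tardiness(weights, jobs):
    """Toy single-tool simulator (stand-in for a discrete-event fab model):
    process jobs in the order given by a linear priority rule and return
    total tardiness. Each job is a (processing_time, due_date) pair."""
    # priority key = w0 * processing_time + w1 * due_date (lower runs first)
    order = sorted(jobs, key=lambda j: weights[0] * j[0] + weights[1] * j[1])
    t, tardiness = 0.0, 0.0
    for proc, due in order:
        t += proc
        tardiness += max(0.0, t - due)
    return tardiness

def es_step(weights, jobs, pop=50, sigma=0.1, lr=0.05):
    """One ES update: sample perturbations, score each candidate
    (embarrassingly parallel in practice), and move the parameters
    along the fitness-weighted average of the noise."""
    grads = [0.0] * len(weights)
    for _ in range(pop):
        eps = [random.gauss(0, 1) for _ in weights]
        trial = [w + sigma * e for w, e in zip(weights, eps)]
        fitness = -simulate_tardiness(trial, jobs)  # maximizing = less tardiness
        for i, e in enumerate(eps):
            grads[i] += fitness * e
    return [w + lr * g / (pop * sigma) for w, g in zip(weights, grads)]

# Synthetic lot list: (processing_time, due_date)
jobs = [(random.uniform(1, 5), random.uniform(5, 30)) for _ in range(20)]
w = [0.0, 0.0]
before = simulate_tardiness(w, jobs)
for _ in range(100):
    w = es_step(w, jobs)
after = simulate_tardiness(w, jobs)
print("tardiness before/after ES:", round(before, 2), round(after, 2))
```

Because ES needs only fitness values (no backpropagation through the simulator), each of the `pop` evaluations can run on a separate CPU core, which is the property behind the near-linear training scaling reported above.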