🤖 AI Summary
This paper addresses the challenges of control and optimization for complex dynamic systems in process systems engineering (PSE) under uncertainty. It provides a systematic review of reinforcement learning (RL) applications tailored specifically to PSE (distinct from general-purpose RL surveys) by introducing a structured tutorial and technical roadmap covering value-based methods, policy gradients, and actor-critic frameworks. Methodological adaptability is analyzed across canonical PSE domains: batch/continuous process control, real-time optimization, and supply chain management. Key contributions include a synthesis of representative success cases and the identification of three critical challenges: model-data hybrid modeling, safety-constrained decision-making, and multi-timescale optimization. The paper proposes three future research directions: interpretable RL, physics-informed learning, and digital twin integration. Collectively, this work offers both theoretical foundations and practical guidance for advancing intelligent automation in PSE.
📝 Abstract
Sequential decision making under uncertainty is central to many Process Systems Engineering (PSE) challenges, where traditional methods often struggle to control and optimize complex, stochastic systems. Reinforcement Learning (RL) offers a data-driven approach to deriving control policies for such challenges. This paper presents a survey and tutorial on RL methods, tailored for the PSE community. We deliver a tutorial on RL, covering fundamental concepts and key algorithmic families, including value-based, policy-based, and actor-critic methods. Subsequently, we survey existing applications of these RL techniques across various PSE domains, such as fed-batch and continuous process control, process optimization, and supply chains. We conclude with a PSE-focused discussion of specialized techniques and emerging directions. By synthesizing the current state of RL algorithm development and its implications for PSE, this work identifies successes, challenges, and trends, and outlines avenues for future research at the interface of these fields.
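To make the value-based family mentioned above concrete, the sketch below shows tabular Q-learning on a toy two-state "on-spec / off-spec" control problem. The environment, states, actions, and hyperparameters are illustrative assumptions for this summary, not taken from the paper.

```python
import random

def step(state, action):
    """Toy dynamics (assumed): action 1 drives the process to the
    'on-spec' state (1); action 0 lets it drift off-spec (0)."""
    next_state = 1 if action == 1 else 0
    reward = 1.0 if next_state == 1 else 0.0
    return next_state, reward

def q_learning(episodes=500, alpha=0.1, gamma=0.9, epsilon=0.1, seed=0):
    rng = random.Random(seed)
    # Tabular action-value function Q(s, a), initialized to zero.
    Q = {(s, a): 0.0 for s in (0, 1) for a in (0, 1)}
    for _ in range(episodes):
        state = rng.choice((0, 1))
        for _ in range(10):  # short episode horizon
            # Epsilon-greedy exploration.
            if rng.random() < epsilon:
                action = rng.choice((0, 1))
            else:
                action = max((0, 1), key=lambda a: Q[(state, a)])
            next_state, reward = step(state, action)
            # Q-learning update: bootstrap from the greedy next-state value.
            best_next = max(Q[(next_state, a)] for a in (0, 1))
            Q[(state, action)] += alpha * (
                reward + gamma * best_next - Q[(state, action)]
            )
            state = next_state
    return Q
```

The same agent-environment loop underlies the policy-based and actor-critic methods the tutorial covers; they replace the table with a parameterized policy and/or value function.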