Survey and Tutorial of Reinforcement Learning Methods in Process Systems Engineering

📅 2025-10-28
📈 Citations: 0
Influential: 0
🤖 AI Summary
This paper addresses the challenges of control and optimization for complex dynamic systems in process systems engineering (PSE) under uncertainty. It provides a systematic review of reinforcement learning (RL) applications tailored specifically to PSE—distinct from general-purpose RL surveys—by introducing a structured tutorial and technical roadmap covering value-based methods, policy gradients, and actor-critic frameworks. Methodological adaptability is analyzed across canonical PSE domains: batch/continuous process control, real-time optimization, and supply chain management. Key contributions include synthesis of representative success cases and identification of three critical challenges: model-data hybrid modeling, safety-constrained decision-making, and multi-timescale optimization. The paper proposes three future research directions: interpretable RL, physics-informed learning, and digital twin integration. Collectively, this work offers both theoretical foundations and practical guidance for advancing intelligent automation in PSE.

📝 Abstract
Sequential decision making under uncertainty is central to many Process Systems Engineering (PSE) challenges, where traditional methods often face limitations in controlling and optimizing complex, stochastic systems. Reinforcement Learning (RL) offers a data-driven approach to derive control policies for such challenges. This paper presents a survey and tutorial on RL methods, tailored for the PSE community. We deliver a tutorial on RL, covering fundamental concepts and key algorithmic families including value-based, policy-based, and actor-critic methods. Subsequently, we survey existing applications of these RL techniques across various PSE domains, such as fed-batch and continuous process control, process optimization, and supply chains. We conclude with a PSE-focused discussion of specialized techniques and emerging directions. By synthesizing the current state of RL algorithm development and its implications for PSE, this work identifies successes, challenges, and trends, and outlines avenues for future research at the interface of these fields.
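To make the "value-based" family mentioned in the abstract concrete, here is a minimal, hypothetical sketch (not taken from the paper) of tabular Q-learning on a toy discretized tank-level control task: states are tank levels 0 to 4, actions adjust the inflow, and the reward penalizes deviation from a setpoint level. All names, dynamics, and hyperparameters below are illustrative assumptions.

```python
import random

# Toy setpoint-tracking MDP (illustrative, not from the paper):
# states are discretized tank levels 0..4, actions shift the level
# by -1, 0, or +1, and the reward penalizes distance from the setpoint.
STATES = range(5)
ACTIONS = [-1, 0, 1]
SETPOINT = 2

def step(state, action):
    """Deterministic toy dynamics: the action shifts the level, clipped to bounds."""
    next_state = max(0, min(4, state + action))
    reward = -abs(next_state - SETPOINT)  # penalize deviation from setpoint
    return next_state, reward

def q_learning(episodes=500, horizon=20, alpha=0.1, gamma=0.9, epsilon=0.1, seed=0):
    """Tabular Q-learning with epsilon-greedy exploration."""
    rng = random.Random(seed)
    q = {(s, a): 0.0 for s in STATES for a in ACTIONS}
    for _ in range(episodes):
        s = rng.choice(list(STATES))
        for _ in range(horizon):
            # epsilon-greedy action selection
            if rng.random() < epsilon:
                a = rng.choice(ACTIONS)
            else:
                a = max(ACTIONS, key=lambda act: q[(s, act)])
            s2, r = step(s, a)
            # temporal-difference update toward the greedy bootstrap target
            best_next = max(q[(s2, act)] for act in ACTIONS)
            q[(s, a)] += alpha * (r + gamma * best_next - q[(s, a)])
            s = s2
    return q

q = q_learning()
# Greedy policy extracted from the learned action-value table:
policy = {s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in STATES}
```

The learned greedy policy drives the level toward the setpoint from either side (increase inflow below it, decrease above it, hold at it). Policy-based and actor-critic methods, also surveyed in the paper, would instead parameterize the policy directly rather than deriving it from a value table.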
Problem

Research questions and friction points this paper is trying to address.

Surveying reinforcement learning methods for process systems engineering
Addressing sequential decision making under uncertainty challenges
Providing tutorial on RL applications in process control optimization
Innovation

Methods, ideas, or system contributions that make the work stand out.

Data-driven reinforcement learning for process control
Survey of value-based and policy-based RL methods
Application of RL techniques in process optimization
Maximilian Bloor
Department of Chemical Engineering, Imperial College London, London
Max Mowbray
Department of Chemical Engineering, Imperial College London, London
Ehecatl Antonio del Rio Chanona
Department of Chemical Engineering, Imperial College London, London
Calvin Tsay
Department of Computing, Imperial College London, London
Optimization, Machine Learning, Process Systems Engineering, Process Control