Safe Deployment of Offline Reinforcement Learning via Input Convex Action Correction

📅 2025-07-30
📈 Citations: 0
Influential: 0
🤖 AI Summary
For safety-critical control of an exothermic polymerisation continuous stirred-tank reactor (CSTR), offline reinforcement learning agents trained on historical data tend to exhibit steady-state offsets and degraded tracking performance near setpoints. Method: This paper proposes a safety-aware offline RL framework that evaluates behaviour cloning and implicit Q-learning, trained solely on historical process data, within a Gymnasium-compatible simulation environment. A gradient-based, deployment-time safety-correction layer is introduced, using partially input-convex neural networks (PICNNs) as learned cost models: it refines policy actions by descending state-conditioned convex cost surfaces, enabling real-time, interpretable correction without retraining or environment interaction. Contribution/Results: Evaluated on industrially relevant scenarios, including startup and grade transitions, the method outperforms traditional controllers: it removes steady-state offsets, improves setpoint tracking accuracy, and maintains closed-loop stability under safety constraints across all scenarios.

📝 Abstract
Offline reinforcement learning (offline RL) offers a promising framework for developing control strategies in chemical process systems using historical data, without the risks or costs of online experimentation. This work investigates the application of offline RL to the safe and efficient control of an exothermic polymerisation continuous stirred-tank reactor. We introduce a Gymnasium-compatible simulation environment that captures the reactor's nonlinear dynamics, including reaction kinetics, energy balances, and operational constraints. The environment supports three industrially relevant scenarios: startup, grade change down, and grade change up. It also includes reproducible offline datasets generated from proportional-integral controllers with randomised tunings, providing a benchmark for evaluating offline RL algorithms in realistic process control tasks. We assess behaviour cloning and implicit Q-learning as baseline algorithms, highlighting the challenges offline agents face, including steady-state offsets and degraded performance near setpoints. To address these issues, we propose a novel deployment-time safety layer that performs gradient-based action correction using partially input-convex neural networks (PICNNs) as learned cost models. The PICNN enables real-time, differentiable correction of policy actions by descending a convex, state-conditioned cost surface, without requiring retraining or environment interaction. Experimental results show that offline RL, particularly when combined with convex action correction, can outperform traditional control approaches and maintain stability across all scenarios. These findings demonstrate the feasibility of integrating offline RL with interpretable and safety-aware corrections for high-stakes chemical process control, and lay the groundwork for more reliable data-driven automation in industrial systems.
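The deployment-time correction the abstract describes can be sketched as a few gradient-descent steps on a convex, state-conditioned cost. A minimal NumPy sketch, where a hand-written quadratic stands in for the learned PICNN cost model; the names `a_safe` and `grad_cost` are illustrative, not from the paper:

```python
import numpy as np

def correct_action(a0, grad_cost, lr=0.1, steps=50):
    """Refine a policy action by descending a convex cost surface
    (stand-in for a learned PICNN cost conditioned on the state)."""
    a = np.asarray(a0, dtype=float).copy()
    for _ in range(steps):
        a -= lr * grad_cost(a)  # gradient step on the convex cost
    return a

# Stand-in convex cost for one state: c(a) = ||a - a_safe||^2,
# whose minimiser is a hypothetical "safe" action a_safe.
a_safe = np.array([0.3, -0.1])
grad = lambda a: 2.0 * (a - a_safe)

a_corrected = correct_action(np.array([1.0, 1.0]), grad)
```

Because the cost is convex in the action, this descent has a unique minimiser, which is what makes the correction verifiable at runtime.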
Problem

Research questions and friction points this paper is trying to address.

Safe control of exothermic reactor using offline RL
Address steady-state offsets in offline RL policies
Deploy RL with real-time convex action correction
Innovation

Methods, ideas, or system contributions that make the work stand out.

Input convex neural networks for action correction
Gymnasium-compatible reactor simulation environment
Gradient-based convex cost surface descent
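The convexity that the correction relies on comes from the network architecture itself: in a partially input-convex network, the weights on the hidden "convex path" are constrained non-negative and the activations are convex and non-decreasing, so the output is convex in the action for any state. A toy NumPy sketch under those constraints (all layer sizes and weights are illustrative, not the paper's architecture):

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative weights: state path unconstrained, action path Wz >= 0.
Ws1 = rng.normal(size=(8, 3))       # state input (dim 3) -> hidden
Wa1 = rng.normal(size=(8, 2))       # action input (dim 2) -> hidden
Wz = np.abs(rng.normal(size=(1, 8)))  # non-negative: preserves convexity in a
Wa2 = rng.normal(size=(1, 2))       # direct affine pass-through of the action

def picnn_cost(s, a):
    """Scalar cost, convex in the action a for any fixed state s."""
    z1 = np.maximum(Wa1 @ a + Ws1 @ s, 0.0)  # ReLU of affine map: convex in a
    return (Wz @ z1 + Wa2 @ a).item()        # non-neg. combo of convex + affine
```

Each ReLU unit is convex in `a`; a non-negative combination of convex functions plus an affine term stays convex, which is the property the runtime descent exploits.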