🤖 AI Summary
To address the performance degradation that reinforcement learning policies suffer when deployed on hardware due to uncertainties in environmental dynamics, this paper proposes PPO-PGDLC: a variant of Proximal Policy Optimization that imposes Lipschitz regularization exclusively on the critic network and couples it with Projected Gradient Descent (PGD) to compute adversarial state perturbations. This joint design approximates a robust Bellman operator, enhancing policy smoothness and robustness without explicit actor regularization. The method remains theoretically interpretable and simple to implement. Evaluated on two canonical control benchmarks and a real-robot locomotion task, PPO-PGDLC consistently outperforms multiple baselines, demonstrating superior policy performance, smoother actions, and improved robustness against dynamical perturbations.
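The summary does not say how the critic's Lipschitz constant is controlled; one common mechanism for Lipschitz regularization of a network is spectral normalization, which rescales each weight matrix by its largest singular value so the layer is 1-Lipschitz in the L2 sense. The sketch below is illustrative only (the function name and the power-iteration setup are assumptions, and the paper may instead use a gradient penalty or a soft Lipschitz loss term):

```python
import numpy as np

def spectral_normalize(W, n_iters=20):
    """Illustrative sketch: estimate the largest singular value of W by
    power iteration, then rescale W so the linear layer's Lipschitz
    constant (w.r.t. the L2 norm) is at most 1. This is one common way
    to impose Lipschitz regularization on a critic network; it is not
    necessarily the exact mechanism used in PPO-PGDLC."""
    u = np.random.default_rng(0).standard_normal(W.shape[0])
    for _ in range(n_iters):
        v = W.T @ u                 # right singular vector estimate
        v /= np.linalg.norm(v)
        u = W @ v                   # left singular vector estimate
        u /= np.linalg.norm(u)
    sigma = u @ W @ v               # largest singular value estimate
    return W / sigma

# Toy weight matrix with spectral norm 3; after normalization it is 1.
W = np.array([[3.0, 0.0],
              [0.0, 1.0]])
W_sn = spectral_normalize(W)
```

Applying this to every layer bounds the whole critic's Lipschitz constant by the product of the per-layer bounds, which is what makes the learned value function, and hence the policy update it drives, smooth in the state.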
📝 Abstract
Uncertainties in transition dynamics pose a critical challenge in reinforcement learning (RL), often causing trained policies to degrade when deployed on hardware. Many robust RL approaches follow one of two strategies: enforcing smoothness in the actor or actor-critic modules with Lipschitz regularization, or learning robust Bellman operators. However, the first strategy has not investigated the impact of critic-only Lipschitz regularization on policy robustness, while the second lacks comprehensive validation in real-world scenarios. To address this gap, and building on prior work, we propose PPO-PGDLC, an algorithm based on Proximal Policy Optimization (PPO) that integrates Projected Gradient Descent (PGD) with a Lipschitz-regularized critic (LC). The PGD component computes an adversarial state within an uncertainty set to approximate the robust Bellman operator, and the Lipschitz-regularized critic further improves the smoothness of learned policies. Experimental results on two classic control tasks and one real-world robotic locomotion task demonstrate that, compared to several baseline algorithms, PPO-PGDLC achieves better performance and predicts smoother actions under environmental perturbations.
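The PGD step described above, searching an uncertainty set around the nominal next state for the worst case before bootstrapping the value target, can be sketched as follows. This is a minimal illustration with assumed names (`V`, `grad_V`, `pgd_adversarial_state`) and a hand-written quadratic critic so the gradient is analytic; in the actual algorithm the critic is a neural network and the step sizes, radius, and iteration count are hyperparameters:

```python
import numpy as np

# Illustrative quadratic critic V(s) = -||s||^2 with a closed-form
# gradient; stands in for the learned critic network.
def V(s):
    return -np.dot(s, s)

def grad_V(s):
    return -2.0 * s

def pgd_adversarial_state(s_nominal, eps=0.1, step=0.05, iters=10):
    """Projected gradient descent sketch: find the state inside an
    L2 ball of radius eps around the nominal next state that minimizes
    the critic's value, i.e. the worst case the adversary can pick."""
    s = s_nominal.copy()
    for _ in range(iters):
        s = s - step * grad_V(s)          # descend V: adversary lowers the value
        delta = s - s_nominal
        norm = np.linalg.norm(delta)
        if norm > eps:                    # project back onto the eps-ball
            s = s_nominal + delta * (eps / norm)
    return s

s0 = np.array([1.0, 0.0])
s_adv = pgd_adversarial_state(s0)
```

The robust Bellman target is then bootstrapped from `V(s_adv)` instead of `V(s0)`, so the critic is trained against the worst admissible dynamics perturbation rather than the nominal transition.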