🤖 AI Summary
This work addresses high-frequency action oscillations in deep reinforcement learning agents for continuous control, which cause excessive energy consumption and mechanical wear and thereby hinder real-world deployment. The study proposes a smoothness regularization technique that incorporates a third-order derivative (jerk) penalty into the policy optimization objective, improving action smoothness while preserving task performance. Empirical evaluations across four standard continuous control benchmarks demonstrate a consistent reduction in action variation rates. Notably, when applied to building energy management, specifically HVAC systems, the method reduces equipment switching frequency by up to 60%, substantially improving the deployability and operational efficiency of learned policies in practical engineering settings.
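Conceptually, a jerk penalty can be approximated with third-order finite differences over a trajectory of consecutive actions. The PyTorch sketch below is a minimal reading of that idea, not the paper's implementation; the function names, the `(T, action_dim)` shape convention, and the coefficient `lambda_jerk` are illustrative assumptions:

```python
import torch

def jerk_penalty(actions: torch.Tensor) -> torch.Tensor:
    """Mean squared third-order finite difference of an action trajectory.

    Expects `actions` with shape (T, action_dim): T consecutive actions.
    The third difference a[t+3] - 3a[t+2] + 3a[t+1] - a[t] is a discrete
    analogue of the third time derivative (jerk) of the action signal.
    Requires T >= 4.
    """
    d3 = actions[3:] - 3 * actions[2:-1] + 3 * actions[1:-2] - actions[:-3]
    return d3.pow(2).mean()

def regularized_loss(policy_loss: torch.Tensor,
                     actions: torch.Tensor,
                     lambda_jerk: float = 0.1) -> torch.Tensor:
    """Task loss plus a weighted jerk term. `policy_loss` would come from
    the base RL algorithm (e.g., PPO's clipped surrogate); `lambda_jerk`
    is a hypothetical trade-off coefficient, not a value from the paper."""
    return policy_loss + lambda_jerk * jerk_penalty(actions)

# Sanity check: a linear ramp has zero third difference, so its penalty
# vanishes, while a noisy signal incurs a large one.
ramp = torch.linspace(0, 1, 64).unsqueeze(-1)
print(jerk_penalty(ramp), jerk_penalty(torch.randn(64, 1)))
```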
📝 Abstract
Deep reinforcement learning agents often exhibit erratic, high-frequency control behaviors that hinder real-world deployment due to excessive energy consumption and mechanical wear. We systematically investigate action smoothness regularization through higher-order derivative penalties, progressing from systematic analysis on continuous control benchmarks to practical validation in building energy management. Our evaluation across four continuous control environments demonstrates that third-order derivative penalties (jerk minimization) consistently achieve superior smoothness while maintaining competitive task performance. We extend these findings to HVAC control, where smooth policies reduce equipment switching by 60%, translating to significant operational benefits. Our work establishes higher-order action regularization as an effective bridge between RL optimization and operational constraints in energy-critical applications.
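To make the reported smoothness metrics concrete, the sketch below shows one plausible way to quantify action variation and equipment switching; the metric definitions and the toy signals are illustrative assumptions, not the paper's evaluation protocol:

```python
import numpy as np

def action_variation_rate(actions: np.ndarray) -> float:
    """Mean absolute first difference of an action sequence; a common
    smoothness metric (lower is smoother)."""
    return float(np.mean(np.abs(np.diff(actions, axis=0))))

def switching_count(on_off: np.ndarray) -> int:
    """Number of state changes in a binary on/off equipment signal."""
    return int(np.sum(on_off[1:] != on_off[:-1]))

# Toy comparison (illustrative numbers only, not the paper's data):
baseline = np.array([0, 1, 0, 1, 0, 1, 0, 1])  # oscillating controller
smooth = np.array([0, 0, 0, 0, 1, 1, 1, 1])    # single switch
reduction = 1 - switching_count(smooth) / switching_count(baseline)
print(f"switching reduced by {reduction:.0%}")  # ~86% on this toy signal
```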