Higher-Order Action Regularization in Deep Reinforcement Learning: From Continuous Control to Building Energy Management

📅 2026-01-05
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the challenge of high-frequency action oscillations in deep reinforcement learning agents for continuous control tasks, which often lead to excessive energy consumption and severe mechanical wear, thereby hindering real-world deployment. The study proposes a novel smoothness regularization technique by systematically incorporating a third-order derivative (jerk) penalty into the policy optimization objective. This approach significantly enhances action smoothness while preserving task performance. Empirical evaluations across four standard continuous control benchmarks demonstrate a consistent reduction in action variation rates. Notably, when applied to building energy management—specifically HVAC systems—the method reduces equipment switching frequency by up to 60%, substantially improving both the deployability and operational efficiency of learned policies in practical engineering contexts.

📝 Abstract
Deep reinforcement learning agents often exhibit erratic, high-frequency control behaviors that hinder real-world deployment due to excessive energy consumption and mechanical wear. We systematically investigate action smoothness regularization through higher-order derivative penalties, progressing from theoretical understanding in continuous control benchmarks to practical validation in building energy management. Our comprehensive evaluation across four continuous control environments demonstrates that third-order derivative penalties (jerk minimization) consistently achieve superior smoothness while maintaining competitive performance. We extend these findings to HVAC control systems where smooth policies reduce equipment switching by 60%, translating to significant operational benefits. Our work establishes higher-order action regularization as an effective bridge between RL optimization and operational constraints in energy-critical applications.
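The abstract describes penalizing the third-order derivative of the action sequence (jerk) alongside the task objective. A minimal sketch of that idea, using a discrete third difference over a batch of actions, might look as follows. Note that the function names, the mean-squared form of the penalty, and the weight `lam` are illustrative assumptions; the paper's exact penalty formulation and hyperparameters are not reproduced here.

```python
import numpy as np

def jerk_penalty(actions, dt=1.0):
    """Discrete jerk penalty over an action trajectory.

    actions: array of shape (T, action_dim), one action per timestep.
    The jerk at step t is approximated by the third finite difference
    a[t+3] - 3*a[t+2] + 3*a[t+1] - a[t], scaled by dt**3.
    Returns the mean squared jerk magnitude (assumed form, for
    illustration only).
    """
    jerk = np.diff(actions, n=3, axis=0) / dt**3
    return float(np.mean(np.sum(jerk**2, axis=-1)))

def regularized_loss(task_loss, actions, lam=0.01):
    # Total objective: task loss plus weighted jerk penalty.
    # `lam` is a hypothetical regularization weight, not a value
    # from the paper.
    return task_loss + lam * jerk_penalty(actions)
```

A smoothly ramping action sequence incurs zero penalty (its third differences vanish), while a rapidly oscillating one is penalized heavily, which is the mechanism by which such a term discourages high-frequency switching.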
Problem

Research questions and friction points this paper is trying to address.

deep reinforcement learning
action smoothness
high-frequency control
energy management
mechanical wear
Innovation

Methods, ideas, or system contributions that make the work stand out.

higher-order regularization
action smoothness
jerk minimization
deep reinforcement learning
building energy management