🤖 AI Summary
Reinforcement learning (RL) policies exhibit poor generalization across robots with disparate configurations, tasks, or physical parameters, hindering real-world deployment. To address this, we propose a zero-shot policy transfer method based on dimensional analysis: leveraging the Buckingham Pi theorem, we map policy inputs (states) and outputs (actions) into a dimensionless space, enabling direct cross-system policy reuse without fine-tuning. We validate our approach on simulated and physical pendulums, as well as the HalfCheetah robot, systems spanning multiple dynamical scales. Our method preserves policy performance under dynamic similarity and significantly outperforms direct-transfer baselines in dynamically dissimilar settings. Crucially, this work introduces dimensional analysis, grounded in physical principles, as a systematic framework for RL policy transfer. It establishes the first physically interpretable, theoretically principled, and broadly applicable approach to zero-shot generalization in RL, bridging control theory and deep reinforcement learning.
📝 Abstract
Reinforcement learning (RL) policies often fail to generalize to new robots, tasks, or environments with different physical parameters, a challenge that limits their real-world applicability. This paper presents a simple, zero-shot transfer method based on Buckingham's Pi Theorem to address this limitation. The method adapts a pre-trained policy to new system contexts by scaling its inputs (observations) and outputs (actions) through a dimensionless space, requiring no retraining. The approach is evaluated against a naive transfer baseline across three environments of increasing complexity: a simulated pendulum, a physical pendulum for sim-to-real validation, and the high-dimensional HalfCheetah. Results demonstrate that the scaled transfer exhibits no loss of performance on dynamically similar contexts. Furthermore, on non-similar contexts, the scaled policy consistently outperforms the naive transfer, significantly expanding the volume of contexts where the original policy remains effective. These findings demonstrate that dimensional analysis provides a powerful and practical tool to enhance the robustness and generalization of RL policies.
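The input/output scaling described in the abstract can be sketched for the pendulum case. The snippet below is a minimal illustration under stated assumptions, not the paper's implementation: the characteristic scales (mass `m`, length `l`, gravity `g`), the chosen dimensionless groups (angle θ, angular velocity θ̇·√(l/g), torque τ/(m·g·l)), and all function names are assumptions made for this example.

```python
import numpy as np

# Hypothetical characteristic scales (assumed, for illustration only):
# m [kg], l [m], g [m/s^2]
SRC = dict(m=1.0, l=1.0, g=9.81)  # system the policy was trained on
TGT = dict(m=0.5, l=2.0, g=9.81)  # new system we transfer to, zero-shot

def to_dimensionless(theta, theta_dot, p):
    # theta is already dimensionless; scale angular velocity by sqrt(l/g)
    return theta, theta_dot * np.sqrt(p["l"] / p["g"])

def from_dimensionless(theta, theta_dot_star, p):
    # inverse of to_dimensionless for a given set of scales
    return theta, theta_dot_star / np.sqrt(p["l"] / p["g"])

def transfer_action(policy, obs_tgt):
    """Zero-shot transfer: target obs -> dimensionless -> source units ->
    policy -> dimensionless action -> target units."""
    theta, theta_dot = obs_tgt
    # 1) express the target observation in dimensionless Pi groups
    th_star, thd_star = to_dimensionless(theta, theta_dot, TGT)
    # 2) re-dimensionalize with the source scales and query the policy
    obs_src = from_dimensionless(th_star, thd_star, SRC)
    tau_src = policy(np.array(obs_src))
    # 3) map the torque through its Pi group tau / (m * g * l)
    tau_star = tau_src / (SRC["m"] * SRC["g"] * SRC["l"])
    return tau_star * (TGT["m"] * TGT["g"] * TGT["l"])
```

Because every quantity passes through its dimensionless group, a policy queried this way behaves identically on any target system that is dynamically similar to the source; no retraining is involved.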