Zero-Shot Policy Transfer in Reinforcement Learning using Buckingham's Pi Theorem

📅 2025-10-09
📈 Citations: 0
Influential: 0
🤖 AI Summary
Reinforcement learning (RL) policies generalize poorly across robots with different configurations, tasks, or physical parameters, which hinders real-world deployment. To address this, we propose a zero-shot policy transfer method based on dimensional analysis: leveraging the Buckingham Pi theorem, we map policy inputs (states) and outputs (actions) into a dimensionless space, enabling direct cross-system policy reuse without fine-tuning. We validate the approach on simulated and physical pendulums, as well as the HalfCheetah robot, systems spanning multiple dynamical scales. The method preserves policy performance under dynamic similarity and significantly outperforms direct-transfer baselines in dynamically dissimilar settings. More broadly, this work introduces dimensional analysis, grounded in physical principles, as a physically interpretable and theoretically principled framework for zero-shot generalization in RL, bridging control theory and deep reinforcement learning.
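
As a concrete illustration of the dimensionless mapping described above, here is a hedged worked example for a torque-limited pendulum using the standard pendulum scales; the paper's exact choice of Pi groups may differ.

```latex
% Assumed setup: state (theta, omega), action tau, parameters m (mass),
% l (length), g (gravity). With n = 6 variables over k = 3 base dimensions
% (M, L, T), Buckingham's Pi theorem yields n - k = 3 dimensionless groups.
% Choosing m, l, g as the repeating variables:
\Pi_1 = \theta, \qquad
\Pi_2 = \omega \sqrt{\frac{l}{g}}, \qquad
\Pi_3 = \frac{\tau}{m g l}
```

Two contexts whose dimensionless groups coincide (including dimensionless constraints such as the torque limit \(\tau_{\max}/(mgl)\)) are dynamically similar, which is exactly the regime where the abstract reports no loss of performance.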

📝 Abstract
Reinforcement learning (RL) policies often fail to generalize to new robots, tasks, or environments with different physical parameters, a challenge that limits their real-world applicability. This paper presents a simple, zero-shot transfer method based on Buckingham's Pi Theorem to address this limitation. The method adapts a pre-trained policy to new system contexts by scaling its inputs (observations) and outputs (actions) through a dimensionless space, requiring no retraining. The approach is evaluated against a naive transfer baseline across three environments of increasing complexity: a simulated pendulum, a physical pendulum for sim-to-real validation, and the high-dimensional HalfCheetah. Results demonstrate that the scaled transfer exhibits no loss of performance on dynamically similar contexts. Furthermore, on non-similar contexts, the scaled policy consistently outperforms the naive transfer, significantly expanding the volume of contexts where the original policy remains effective. These findings demonstrate that dimensional analysis provides a powerful and practical tool to enhance the robustness and generalization of RL policies.
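
To make the scaling concrete, the following is a minimal Python sketch of the observation/action rescaling for the pendulum case. It assumes the standard pendulum Pi groups above; `PendulumContext`, `scaled_transfer`, and the scaling choices are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

class PendulumContext:
    """Physical parameters of one pendulum instance (illustrative)."""
    def __init__(self, m, l, g):
        self.m, self.l, self.g = m, l, g
        self.t_c = np.sqrt(l / g)   # characteristic time:   omega* = omega * t_c
        self.tau_c = m * g * l      # characteristic torque: tau*   = tau / tau_c

def scaled_transfer(policy, source, target, obs_target):
    """Reuse `policy` (trained on `source`) in `target` via dimensionless space.

    `policy` maps a source-units observation (theta, omega) to a torque;
    `obs_target` is the current observation (theta, omega) in target units.
    """
    theta, omega_target = obs_target
    # 1. Target observation -> dimensionless space (theta is already dimensionless).
    omega_star = omega_target * target.t_c
    # 2. Dimensionless -> source units, where the frozen policy is valid.
    omega_source = omega_star / source.t_c
    # 3. Query the pre-trained policy in its native units.
    tau_source = policy(np.array([theta, omega_source]))
    # 4. Source action -> dimensionless -> target units.
    tau_star = tau_source / source.tau_c
    return tau_star * target.tau_c
```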
Problem

Research questions and friction points this paper is trying to address.

Transferring RL policies across robots without retraining
Addressing generalization failure in varying physical parameters
Enabling zero-shot policy adaptation using dimensional analysis
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses Buckingham Pi Theorem for policy transfer
Scales policy inputs and outputs without retraining
Enables zero-shot transfer across different physical contexts (see the usage sketch below)
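
A hypothetical usage of the sketch above, contrasting naive and scaled transfer between two contexts; the parameter values and the stand-in policy are made up for illustration.

```python
# Transfer from a small training pendulum to a larger deployment pendulum.
source = PendulumContext(m=0.1, l=0.2, g=9.81)   # training context
target = PendulumContext(m=1.0, l=1.0, g=9.81)   # deployment context

# Stand-in for a trained policy (a simple PD law, for illustration only).
trained_policy = lambda obs: -2.0 * obs[0] - 0.5 * obs[1]

obs = np.array([0.3, -1.0])          # (theta [rad], omega [rad/s]) in target units
tau_naive = trained_policy(obs)      # naive transfer: raw reuse, units ignored
tau_scaled = scaled_transfer(trained_policy, source, target, obs)
print(tau_naive, tau_scaled)
```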
Francisco Pascoa
Department of Mechanical Engineering, Université de Sherbrooke, Qc, Canada
Ian Lalonde
Department of Mechanical Engineering, Université de Sherbrooke, Qc, Canada
Alexandre Girard
Université de Sherbrooke
Design and Control of Robotic Systems · Actuator Technologies · Learning