🤖 AI Summary
This work addresses negative transfer in cross-domain reinforcement learning, which often arises from mismatches between the state or action spaces of the source and target domains. To mitigate this issue, the authors propose a cross-domain Bellman consistency metric to evaluate transferability and introduce QAvatar, an adaptive hybrid critic that dynamically blends source- and target-domain Q-functions through a hyperparameter-free weight function. This design accommodates heterogeneous state and action spaces and guards against negative transfer, enabling stable and efficient policy transfer across domains. Empirical evaluations on benchmark tasks, including locomotion control and robotic manipulation, show that the proposed framework outperforms existing methods in both transfer performance and robustness, and a theoretical convergence analysis further supports its efficacy.
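To make the transferability idea concrete, here is a minimal Python sketch of one plausible reading of the consistency metric: score a frozen source-domain Q-function by its one-step Bellman residual on transitions collected in the target domain. The abstract does not spell out the exact definition, so the function name, the `q_source(state, action)` callable, and the transition format are all illustrative assumptions, not the paper's API.

```python
import numpy as np

def bellman_consistency_score(q_source, transitions, gamma=0.99):
    """Mean squared one-step Bellman residual of a frozen source-domain
    Q-function, evaluated on transitions gathered in the *target* domain.

    q_source:    callable (state, action) -> scalar Q-value (hypothetical API)
    transitions: iterable of (s, a, r, s_next, candidate_next_actions)
    Returns a nonnegative score; lower means the source Q is more
    consistent with target-domain dynamics, i.e., more transferable.
    """
    residuals = []
    for s, a, r, s_next, next_actions in transitions:
        # Bellman backup of the source Q under target-domain reward/dynamics
        backup = r + gamma * max(q_source(s_next, a2) for a2 in next_actions)
        residuals.append((q_source(s, a) - backup) ** 2)
    return float(np.mean(residuals))
```

Under this reading, a source model that explains target-domain transitions well yields a small residual and is a safer candidate for transfer.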
📝 Abstract
Cross-domain reinforcement learning (CDRL) aims to improve the data efficiency of RL by leveraging data samples collected from a source domain to facilitate learning in a similar target domain. Despite its potential, cross-domain transfer in RL faces two fundamental and intertwined challenges: (i) the source and target domains can have distinct state or action spaces, which makes direct transfer infeasible and requires more sophisticated inter-domain mappings; (ii) the transferability of a source-domain model is not easily identifiable a priori, and hence CDRL is prone to negative transfer. In this paper, we propose to jointly tackle these two challenges through the lens of *cross-domain Bellman consistency* and a *hybrid critic*. Specifically, we first introduce the notion of cross-domain Bellman consistency as a measure of the transferability of a source-domain model. We then propose QAvatar, which combines the Q-functions from the source and target domains through an adaptive, hyperparameter-free weight function. With this design, we characterize the convergence behavior of QAvatar and show that it achieves reliable transfer, in the sense that it effectively leverages a source-domain Q-function for knowledge transfer to the target domain. Through experiments, we demonstrate that QAvatar achieves favorable transferability across various RL benchmark tasks, including locomotion and robot arm manipulation. Our code is available at https://rl-bandits-lab.github.io/Cross-Domain-RL/.
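The hybrid-critic idea can be sketched in a few lines: form TD targets from a convex combination of the source- and target-domain Q-functions, with the source weight shrinking as its cross-domain Bellman residual grows. The specific weight rule below (w = 1 / (1 + residual)) is an illustrative, parameter-free stand-in for the paper's actual adaptive weight function, which the abstract does not specify; all names are hypothetical.

```python
def qavatar_style_target(q_source, q_target, r, s_next, next_actions,
                         gamma, residual):
    """One TD target from a hybrid critic blending source- and
    target-domain Q-functions.

    NOTE: the weight rule here is an assumption for illustration only,
    not the paper's method. It captures the intended behavior: the
    source Q is down-weighted as its cross-domain Bellman residual
    grows, limiting negative transfer.
    """
    w = 1.0 / (1.0 + residual)  # w -> 1 when the source Q is consistent

    def blended_q(s, a):
        return w * q_source(s, a) + (1.0 - w) * q_target(s, a)

    # Standard one-step backup, but bootstrapped from the blended critic
    return r + gamma * max(blended_q(s_next, a) for a in next_actions)
```

As the target-domain critic improves and the measured residual of the source model stays high, the blend automatically collapses toward pure target-domain learning, which is the failure-safe behavior the abstract's reliability claim suggests.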