Cross-Domain Offline Policy Adaptation via Selective Transition Correction

📅 2026-02-05
📈 Citations: 1
Influential: 0
🤖 AI Summary
This work addresses the performance degradation in cross-domain offline reinforcement learning caused by dynamics mismatches between the source and target domains. To mitigate this issue, the authors propose Selective Transition Correction (STC), which uses an inverse policy model and a reward model to correct the actions and rewards of source-domain transitions, explicitly aligning them with the target-domain dynamics. A forward dynamics model then filters the corrected samples, retaining only those that match the target dynamics better than the originals, which improves the reliability of the correction process. Experiments across multiple environments with significant dynamics shifts demonstrate that STC substantially outperforms existing baselines, effectively improving cross-domain policy transfer.

📝 Abstract
It remains a critical challenge to adapt policies across domains with mismatched dynamics in reinforcement learning (RL). In this paper, we study cross-domain offline RL, where an offline dataset from another similar source domain can be accessed to enhance policy learning upon a target domain dataset. Directly merging the two datasets may lead to suboptimal performance due to potential dynamics mismatches. Existing approaches typically mitigate this issue through source domain transition filtering or reward modification, which, however, may lead to insufficient exploitation of the valuable source domain data. Instead, we propose to modify the source domain data into the target domain data. To that end, we leverage an inverse policy model and a reward model to correct the actions and rewards of source transitions, explicitly achieving alignment with the target dynamics. Since limited data may result in inaccurate model training, we further employ a forward dynamics model to retain corrected samples that better match the target dynamics than the original transitions. Consequently, we propose the Selective Transition Correction (STC) algorithm, which enables reliable usage of source domain data for policy adaptation. Experiments on various environments with dynamics shifts demonstrate that STC achieves superior performance against existing baselines.
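The correction-and-selection loop described in the abstract can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the three models are toy stand-ins (their names and the simple linear dynamics are assumptions), standing in for the learned inverse policy, reward model, and forward dynamics model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for the paper's learned models (illustrative only):
def inverse_policy(s, s_next):
    return 0.5 * (s_next - s)          # infer a target-domain action from (s, s')

def reward_model(s, a, s_next):
    return -float(np.linalg.norm(a))   # relabel the reward for the corrected action

def forward_dynamics(s, a):
    return s + 2.0 * a                 # predicted next state under target dynamics

def selective_transition_correction(source_batch):
    """Correct source transitions and keep only those whose corrected
    action matches the target dynamics better than the original one."""
    corrected = []
    for s, a, r, s_next in source_batch:
        a_hat = inverse_policy(s, s_next)
        r_hat = reward_model(s, a_hat, s_next)
        # Selection step: compare forward-model prediction errors and
        # retain the corrected sample only if it aligns better.
        err_orig = np.linalg.norm(forward_dynamics(s, a) - s_next)
        err_corr = np.linalg.norm(forward_dynamics(s, a_hat) - s_next)
        if err_corr < err_orig:
            corrected.append((s, a_hat, r_hat, s_next))
    return corrected

batch = [(rng.normal(size=2), rng.normal(size=2), 0.0, rng.normal(size=2))
         for _ in range(8)]
kept = selective_transition_correction(batch)
print(f"kept {len(kept)} of {len(batch)} corrected transitions")
```

The retained corrected transitions would then be merged with the target-domain dataset for offline policy learning; the filtering step is what makes the use of source data "selective".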
Problem

Research questions and friction points this paper is trying to address.

cross-domain
offline reinforcement learning
dynamics mismatch
policy adaptation
domain adaptation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Selective Transition Correction
Cross-Domain Offline RL
Inverse Policy Model
Dynamics Alignment
Forward Dynamics Model