AI Summary
Cross-domain offline reinforcement learning faces two key challenges when target-domain data is scarce: (1) source-target dataset size imbalance causes neural-network-based domain gap estimators to overfit, and (2) only partial alignment exists between the source and target domains. To address these, we propose DmC, the first framework to incorporate k-nearest-neighbor (k-NN) nonparametric estimation into cross-domain offline RL. By leveraging local neighborhood statistics, DmC robustly models domain proximity without overfitting. Building on this, we design a neighbor-guided diffusion mechanism that selectively synthesizes high-fidelity source samples aligned with the target domain, thereby enhancing policy training. Evaluated on MuJoCo multi-task benchmarks, DmC significantly outperforms state-of-the-art methods, especially under extreme data scarcity (e.g., only 10-50 target trajectories), demonstrating superior robustness and consistent performance gains.
Abstract
Cross-domain offline reinforcement learning (RL) seeks to improve sample efficiency in offline RL by leveraging additional offline source datasets. A key challenge is to identify and exploit the source samples most relevant to the target domain. Existing approaches measure domain gaps through domain classifiers, target transition dynamics modeling, or mutual information estimation with contrastive losses. However, these methods often require large target datasets, which are impractical in many real-world scenarios. In this work, we address cross-domain offline RL under a limited target data setting and identify two primary challenges: (1) dataset imbalance, where a large source dataset paired with a small target dataset causes neural-network-based domain gap estimators to overfit, yielding uninformative measurements; and (2) partial domain overlap, where only a subset of the source data is closely aligned with the target domain. To overcome these issues, we propose DmC, a novel framework for cross-domain offline RL with limited target samples. Specifically, DmC uses $k$-nearest-neighbor ($k$-NN) estimation to measure domain proximity without training a neural network, effectively mitigating overfitting. Guided by this domain proximity, we then introduce a nearest-neighbor-guided diffusion model that generates additional source samples better aligned with the target domain, enhancing policy learning with more effective source data. Through theoretical analysis and extensive experiments in diverse MuJoCo environments, we demonstrate that DmC significantly outperforms state-of-the-art cross-domain offline RL methods, achieving substantial performance gains.
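To make the $k$-NN proximity idea concrete, here is a minimal, hypothetical sketch (not the paper's exact estimator): each source transition is scored by its distance to its $k$-th nearest target transition in flattened $(s, a, s')$ space, so no neural network is trained and small target datasets cannot be overfit. The function name, feature layout, and Gaussian toy data below are illustrative assumptions.

```python
# Hypothetical k-NN domain-proximity sketch: score source samples by their
# distance to the k-th nearest target sample (closer => higher score).
import numpy as np

def knn_proximity(source, target, k=5):
    """source: (N, d) flattened source transitions; target: (M, d) target
    transitions. Returns one proximity score per source sample, defined as
    the negated Euclidean distance to the k-th nearest target sample."""
    # Pairwise Euclidean distances between every source and target sample.
    dists = np.sqrt(((source[:, None, :] - target[None, :, :]) ** 2).sum(-1))
    # Distance to the k-th nearest target neighbor for each source row.
    kth = np.partition(dists, k - 1, axis=1)[:, k - 1]
    return -kth

rng = np.random.default_rng(0)
target = rng.normal(0.0, 1.0, size=(50, 4))    # small target dataset
near = rng.normal(0.0, 1.0, size=(100, 4))     # source data matching the target
far = rng.normal(5.0, 1.0, size=(100, 4))      # source data far from the target
scores = knn_proximity(np.vstack([near, far]), target, k=5)
print(scores[:100].mean() > scores[100:].mean())  # → True
```

In DmC these proximity scores would then steer the diffusion model toward generating source samples that lie close to the target domain; this toy version only illustrates the scoring step.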