DmC: Nearest Neighbor Guidance Diffusion Model for Offline Cross-domain Reinforcement Learning

📅 2025-07-27
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
Cross-domain offline reinforcement learning faces two key challenges when target-domain data is scarce: (1) source-target dataset size imbalance causes neural-network-based domain gap estimators to overfit, and (2) only partial alignment exists between source and target domains. To address these, we propose DmCβ€”the first framework to incorporate k-nearest-neighbor (k-NN) nonparametric estimation into cross-domain offline RL. DmC robustly models domain proximity without overfitting by leveraging local neighborhood statistics. Building on this, we design a neighbor-guided diffusion mechanism that selectively synthesizes high-fidelity source samples aligned with the target domain, thereby enhancing policy training. Evaluated on MuJoCo multi-task benchmarks, DmC significantly outperforms state-of-the-art methods, especially under extreme data scarcity (e.g., only 10–50 target trajectories), demonstrating superior robustness and consistent performance gains.

πŸ“ Abstract
Cross-domain offline reinforcement learning (RL) seeks to enhance sample efficiency in offline RL by utilizing additional offline source datasets. A key challenge is to identify and utilize source samples that are most relevant to the target domain. Existing approaches address this challenge by measuring domain gaps through domain classifiers, target transition dynamics modeling, or mutual information estimation using contrastive loss. However, these methods often require large target datasets, which is impractical in many real-world scenarios. In this work, we address cross-domain offline RL under a limited target data setting, identifying two primary challenges: (1) Dataset imbalance, which is caused by large source and small target datasets and leads to overfitting in neural network-based domain gap estimators, resulting in uninformative measurements; and (2) Partial domain overlap, where only a subset of the source data is closely aligned with the target domain. To overcome these issues, we propose DmC, a novel framework for cross-domain offline RL with limited target samples. Specifically, DmC utilizes $k$-nearest neighbor ($k$-NN) based estimation to measure domain proximity without neural network training, effectively mitigating overfitting. Then, by utilizing this domain proximity, we introduce a nearest-neighbor-guided diffusion model to generate additional source samples that are better aligned with the target domain, thus enhancing policy learning with more effective source samples. Through theoretical analysis and extensive experiments in diverse MuJoCo environments, we demonstrate that DmC significantly outperforms state-of-the-art cross-domain offline RL methods, achieving substantial performance gains.
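The abstract's core idea, scoring domain proximity with a nonparametric $k$-NN estimate instead of a trained network, can be sketched in a few lines. The details below (Euclidean distance, raw transition features, the choice of $k$) are illustrative assumptions, not the paper's exact design:

```python
import numpy as np

def knn_proximity(source, target, k=3):
    """Score each source sample by the distance to its k-th nearest
    neighbor in the target dataset (smaller = closer to the target
    domain). Nonparametric: no network training, so nothing to
    overfit to a tiny target dataset. Hypothetical simplification
    of DmC's estimator."""
    # pairwise Euclidean distances: shape (n_source, n_target)
    d = np.linalg.norm(source[:, None, :] - target[None, :, :], axis=-1)
    # k-th smallest distance per source sample
    return np.partition(d, k - 1, axis=1)[:, k - 1]

# toy example with 2-D "transitions"
rng = np.random.default_rng(0)
target = rng.normal(0.0, 1.0, size=(20, 2))   # small target dataset
near   = rng.normal(0.0, 1.0, size=(50, 2))   # source samples on-domain
far    = rng.normal(5.0, 1.0, size=(50, 2))   # source samples off-domain
scores = knn_proximity(np.vstack([near, far]), target, k=3)
assert scores[:50].mean() < scores[50:].mean()  # on-domain scores are smaller
```

Because only local neighborhood statistics are used, the score stays informative even when the target set holds just a few trajectories, which is exactly the regime where a learned domain classifier would overfit.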
Problem

Research questions and friction points this paper is trying to address.

How to improve offline RL sample efficiency by exploiting additional cross-domain source datasets
How to measure domain proximity reliably when a small target dataset makes neural estimators overfit
How to obtain source samples aligned with the target domain when the two domains only partially overlap
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses nonparametric k-NN estimation to measure domain proximity without training a network
Nearest-neighbor-guided diffusion model
Generates source samples better aligned with the target domain
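The second contribution, steering generation toward the target domain with neighbor information, can be illustrated by its guidance term alone. The sketch below pulls generated samples toward their nearest target neighbors; the diffusion denoising network itself is omitted, and the update rule and step size are assumptions for illustration, not DmC's actual sampler:

```python
import numpy as np

def nn_guidance_step(x, target, step=0.2):
    """One guidance nudge: move each generated sample toward its
    nearest target-domain neighbor. A toy stand-in for the
    neighbor-guided correction applied during reverse diffusion."""
    d = np.linalg.norm(x[:, None, :] - target[None, :, :], axis=-1)
    nearest = target[d.argmin(axis=1)]     # (n, dim) closest target sample
    return x + step * (nearest - x)        # nudge toward the target manifold

# repeated guidance shrinks the gap to the target set
rng = np.random.default_rng(1)
target = rng.normal(0.0, 1.0, size=(30, 2))
x = rng.normal(4.0, 1.0, size=(10, 2))     # off-domain "generated" samples

def gap(x):
    """Mean distance from each sample to its nearest target neighbor."""
    d = np.linalg.norm(x[:, None, :] - target[None, :, :], axis=-1)
    return d.min(axis=1).mean()

g0 = gap(x)
for _ in range(5):
    x = nn_guidance_step(x, target)
assert gap(x) < g0                          # samples moved toward the target
```

In the full method this correction is combined with the learned denoiser at each reverse step, so the model synthesizes samples that are both realistic and close to the target domain rather than simply copying target data.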
Linh Le Pham Van
Applied Artificial Intelligence Initiative, Deakin University
Minh Hoang Nguyen
Applied Artificial Intelligence Initiative, Deakin University
Duc Kieu
Deakin University
Deep Learning, Diffusion Models
Hung Le
Applied Artificial Intelligence Initiative, Deakin University
Hung The Tran
AI Center, VNPT Media
Machine Learning, Optimization, Reinforcement Learning, Large Language Models
Sunil Gupta
Applied Artificial Intelligence Initiative, Deakin University