Composite Flow Matching for Reinforcement Learning with Shifted-Dynamics Data

📅 2025-05-29
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Offline reinforcement learning suffers from poor generalization when source and target environments exhibit dynamics mismatch. Method: This paper proposes a composite flow matching framework that models the target dynamics as a conditional flow over the source-domain flow, and—novelly—integrates optimal transport theory into flow matching to principledly quantify and reduce dynamics discrepancy via the Wasserstein distance. Furthermore, it introduces an optimistic active sampling strategy that prioritizes exploration in regions of high dynamics divergence, with theoretical guarantees on policy performance improvement. Contribution/Results: Experiments demonstrate that the method significantly outperforms strong baselines across multiple RL benchmarks with dynamics shifts, achieving substantial gains in sample efficiency and policy convergence speed.

Technology Category

Application Category

📝 Abstract
Incorporating pre-collected offline data from a source environment can significantly improve the sample efficiency of reinforcement learning (RL), but this benefit is often challenged by discrepancies between the transition dynamics of the source and target environments. Existing methods typically address this issue by penalizing or filtering out source transitions in high dynamics-gap regions. However, their estimation of the dynamics gap often relies on KL divergence or mutual information, which can be ill-defined when the source and target dynamics have disjoint support. To overcome these limitations, we propose CompFlow, a method grounded in the theoretical connection between flow matching and optimal transport. Specifically, we model the target dynamics as a conditional flow built upon the output distribution of the source-domain flow, rather than learning it directly from a Gaussian prior. This composite structure offers two key advantages: (1) improved generalization for learning target dynamics, and (2) a principled estimation of the dynamics gap via the Wasserstein distance between source and target transitions. Leveraging our principled estimation of the dynamics gap, we further introduce an optimistic active data collection strategy that prioritizes exploration in regions of high dynamics gap, and theoretically prove that it reduces the performance disparity with the optimal policy. Empirically, CompFlow outperforms strong baselines across several RL benchmarks with shifted dynamics.
Problem

Research questions and friction points this paper is trying to address.

Addressing dynamics discrepancy between source and target environments in RL
Improving generalization for learning target dynamics via flow matching
Optimizing data collection in high dynamics-gap regions for better performance
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses flow matching for shifted dynamics
Models target dynamics via source flow
Estimates dynamics gap with Wasserstein distance
🔎 Similar Papers
No similar papers found.