🤖 AI Summary
Diffusion-based trajectory planning incurs high computational overhead at inference time, making it difficult to meet the low-latency requirements of real-time control, and distribution-matching objectives often collapse action diversity. To address these limitations, this work proposes Keyed Drifting Policies (KDP), a single-step multimodal trajectory generation method that amortizes the iterative denoising process into training by introducing a condition-aware key-space distance metric and a drift-field objective. The approach integrates stop-gradient drifted targets, key-space similarity measurement, and a repulsion mechanism to enable efficient training within an offline reinforcement learning framework. Experiments on standard RL benchmarks and real hardware demonstrate that the method achieves significantly reduced planning latency with only a single inference step while matching or even surpassing the performance and behavioral diversity of diffusion models.
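The condition-aware key-space idea can be sketched as follows: nearest-neighbor matching between generated and dataset trajectory windows is computed in a compact key space that reflects the condition, while the resulting update direction is taken in the full trajectory space. This is a minimal illustrative sketch, not the authors' implementation; the `key_fn` shown (using the initial state as the key) is a hypothetical choice for illustration.

```python
import numpy as np

def key_space_nearest(gen, data, key_fn):
    """For each generated trajectory window, find the dataset window that is
    closest in key space (not full trajectory space), and return the
    full-space drift toward that matched window."""
    gk = np.stack([key_fn(t) for t in gen])    # (G, k) keys of generated windows
    dk = np.stack([key_fn(t) for t in data])   # (D, k) keys of dataset windows
    # pairwise squared distances in the compact key space
    d2 = ((gk[:, None, :] - dk[None, :, :]) ** 2).sum(-1)  # (G, D)
    idx = d2.argmin(axis=1)                    # condition-matched neighbors
    # the update direction is still taken in full trajectory space
    drift = data[idx] - gen                    # (G, H, d)
    return idx, drift

# toy example: H=4-step windows of d=2-dim states;
# the key is the first state of the window (a stand-in for the condition)
key_fn = lambda traj: traj[0]
gen = np.zeros((3, 4, 2))
data = np.random.randn(5, 4, 2)
idx, drift = key_space_nearest(gen, data, key_fn)
```

Measuring similarity only over the conditioned key dimensions avoids the failure mode described above, where unconstrained future dimensions dominate the distance and pull every sample toward an average trajectory.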
📝 Abstract
Diffusion-based trajectory planners can synthesize rich, multimodal action sequences for offline reinforcement learning, but their iterative denoising incurs substantial inference-time cost, making closed-loop planning slow under tight compute budgets. We study the problem of achieving diffusion-like trajectory planning behavior with one-step inference, while retaining the ability to sample diverse candidate plans and condition on the current state in a receding-horizon control loop. Our key observation is that conditional trajectory generation fails under naïve distribution-matching objectives when the similarity measure used to align generated trajectories with the dataset is dominated by unconstrained future dimensions. In practice, this causes attraction toward average trajectories, collapses action diversity, and yields near-static behavior. Our key insight is that conditional generative planning requires a conditioning-aware notion of neighborhood: trajectory updates should be computed using distances in a compact key space that reflects the condition, while still applying updates in the full trajectory space. Building on this, we introduce Keyed Drifting Policies (KDP), a one-step trajectory generator trained with a drift-field objective that attracts generated trajectories toward condition-matched dataset windows and repels them from nearby generated samples, using a stop-gradient drifted target to amortize iterative refinement into training. At inference, the resulting policy produces a full trajectory window in a single forward pass. Across standard RL benchmarks and real-time hardware deployments, KDP achieves strong performance with one-step inference and substantially lower planning latency than diffusion sampling. Project website, code and videos: https://keyed-drifting.github.io/
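The drift-field objective described above (attraction toward a condition-matched dataset window, repulsion from nearby generated samples, regression onto a stop-gradient drifted target) might be sketched roughly as below. This is a hedged approximation under assumed definitions, not the paper's actual loss; in particular, repelling from the mean of the other generated samples is a simple stand-in for the repulsion mechanism, and `step` and `repel` are hypothetical coefficients.

```python
import numpy as np

def kdp_drift_target(gen, data_nn, step=0.5, repel=0.1):
    """Build one drifted target per generated sample: attract toward the
    condition-matched dataset window `data_nn`, repel from the other
    generated samples. The target is held fixed (stop-gradient) so the
    policy regresses onto it, amortizing iterative refinement into
    training rather than performing it at inference."""
    attract = data_nn - gen                    # pull toward matched data window
    # repulsion from the mean of the other generated samples (a simple proxy
    # for repelling from nearby generated samples)
    n = max(len(gen) - 1, 1)
    mean_other = (gen.sum(0, keepdims=True) - gen) / n
    repulse = gen - mean_other
    return gen + step * attract + repel * repulse  # treated as a constant target

def kdp_loss(gen, target):
    # mean-squared error against the fixed drifted target
    return float(((gen - target) ** 2).mean())

# toy check: two 3-dim "trajectories" drifted halfway toward their matches
gen = np.zeros((2, 3))
data_nn = np.ones((2, 3))
target = kdp_drift_target(gen, data_nn, step=0.5, repel=0.0)
loss = kdp_loss(gen, target)  # → 0.25
```

In a training loop one would generate `gen` from the one-step policy, compute `target` without gradients flowing through it, and backpropagate `kdp_loss` into the generator; at inference only the single forward pass remains.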