Distilling Reinforcement Learning into Single-Batch Datasets

📅 2025-08-12
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the challenge of distilling complex reinforcement learning (RL) policies into compact, supervision-compatible datasets. We propose a cross-paradigm distillation framework that transforms RL environments into minimal, single-batch synthetic supervised learning (SL) datasets. Our method integrates meta-learning extensions of Proximal Policy Optimization (PPO) with dataset distillation techniques, enabling the construction of SL datasets, distilled from Cart-Pole, MuJoCo, and Atari benchmarks, that support one-step gradient updates for policy initialization. Key contributions include: (i) the first successful paradigm-transfer distillation from RL to SL; (ii) distilled datasets exhibiting strong generalization across diverse model architectures (MLP, CNN, Transformer) and tasks; and (iii) empirical results showing that merely hundreds of distilled samples recover over 90% of original RL policy performance across multiple benchmarks, substantially reducing training cost and accelerating deployment.

📝 Abstract
Dataset distillation compresses a large dataset into a small synthetic dataset such that learning on the synthetic dataset approximates learning on the original. Training on the distilled dataset can be performed in as little as one step of gradient descent. We demonstrate that distillation generalizes across tasks by distilling reinforcement learning environments into one-batch supervised learning datasets. This demonstrates not only distillation's ability to compress a reinforcement learning task but also its ability to transform one learning modality (reinforcement learning) into another (supervised learning). We present a novel extension of proximal policy optimization for meta-learning and use it in the distillation of a multi-dimensional extension of the classic cart-pole problem, all MuJoCo environments, and several Atari games. We demonstrate distillation's ability to compress complex RL environments into one-step supervised learning, explore RL distillation's generalizability across learner architectures, and demonstrate distilling an environment into the smallest possible synthetic dataset.
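The "one step of gradient descent" mechanism above can be sketched concretely. The snippet below trains a freshly initialized linear softmax policy with a single cross-entropy step on a distilled batch; all sizes, the random placeholder batch, and the step size are illustrative assumptions (in the paper's setting the batch contents and learning rate would themselves be learned):

```python
import numpy as np

# Hypothetical toy sizes: N synthetic (state, action-label) pairs for a
# discrete-action, Cart-Pole-like task. In practice X, Y, and lr are the
# distilled quantities; random values stand in for them here.
rng = np.random.default_rng(0)
N, obs_dim, n_actions = 8, 4, 2
X = rng.normal(size=(N, obs_dim))        # synthetic states (would be learned)
Y = rng.integers(0, n_actions, size=N)   # synthetic action labels (would be learned)
lr = 0.5                                 # step size (would be learned)

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def one_step_policy(W, X, Y, lr):
    """One gradient step of cross-entropy loss on the distilled batch."""
    probs = softmax(X @ W)                # (N, n_actions)
    probs[np.arange(N), Y] -= 1.0         # dL/dlogits for cross-entropy
    grad = X.T @ probs / N                # (obs_dim, n_actions)
    return W - lr * grad

def xent(W):
    """Mean cross-entropy of the policy on the distilled batch."""
    p = softmax(X @ W)
    return -np.log(p[np.arange(N), Y]).mean()

W0 = np.zeros((obs_dim, n_actions))      # freshly initialized policy
W1 = one_step_policy(W0, X, Y, lr)       # policy after the single distilled step
loss_before, loss_after = xent(W0), xent(W1)
```

A single step like this is all the "training" a new learner needs once the distilled batch exists; the expensive optimization is pushed into constructing `X`, `Y`, and `lr`.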
Problem

Research questions and friction points this paper is trying to address.

- Compress RL environments into one-batch supervised datasets
- Transform the reinforcement learning modality into supervised learning
- Distill complex tasks into minimal synthetic datasets efficiently
Innovation

Methods, ideas, or system contributions that make the work stand out.

- Compress RL into one-batch supervised learning
- Extend proximal policy optimization for meta-learning
- Transform RL tasks into the smallest possible synthetic datasets
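The meta-learning loop implied by the points above is a bilevel optimization: an inner one-step update on the synthetic batch, and an outer loop that improves the batch itself. The sketch below is a caricature under loud assumptions: the paper's PPO-based outer objective is replaced by a simple imitation loss against a made-up "expert", and finite differences stand in for backpropagation through the inner step; every name and size is illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
N, d, k = 4, 3, 2                       # toy sizes: 4 synthetic samples, 3-dim states, 2 actions
lr_inner, lr_outer, eps = 0.5, 0.5, 1e-4

X = rng.normal(size=(N, d))             # synthetic states (the distilled variables)
Y = rng.integers(0, k, size=N)          # synthetic action labels (kept fixed here for brevity)

# Stand-in outer objective: imitate a fixed "expert" policy on held-out states.
# The paper instead scores the one-step policy with a meta-learning PPO
# surrogate computed from environment rollouts.
W_expert = rng.normal(size=(d, k))
X_eval = rng.normal(size=(64, d))
y_eval = (X_eval @ W_expert).argmax(axis=1)

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def inner_step(X, Y):
    """Train a FRESH policy with one gradient step on the distilled batch."""
    W = np.zeros((d, k))
    p = softmax(X @ W)
    p[np.arange(len(Y)), Y] -= 1.0      # cross-entropy gradient w.r.t. logits
    return W - lr_inner * (X.T @ p) / len(Y)

def outer_loss(X):
    """How well the one-step policy performs (placeholder for the RL objective)."""
    W = inner_step(X, Y)
    p = softmax(X_eval @ W)
    return -np.log(p[np.arange(len(y_eval)), y_eval] + 1e-12).mean()

loss0 = outer_loss(X)
for _ in range(40):                     # outer loop: improve the synthetic data
    base = outer_loss(X)
    g = np.zeros_like(X)
    for idx in np.ndindex(*X.shape):    # finite differences stand in for backprop
        Xp = X.copy()
        Xp[idx] += eps
        g[idx] = (outer_loss(Xp) - base) / eps
    X -= lr_outer * g
loss1 = outer_loss(X)
```

The key structural point survives the simplifications: the synthetic batch, not the policy, is the optimization variable, and each outer evaluation retrains a policy from scratch in one step.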