TD-MPC-Opt: Distilling Model-Based Multi-Task Reinforcement Learning Agents

📅 2025-07-02
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
To address the challenge of deploying large world models in resource-constrained settings for multi-task reinforcement learning, this paper proposes an efficient multi-task knowledge distillation framework. It demonstrates knowledge transfer from a 317M-parameter teacher model to a 1M-parameter student model, which is further compressed via FP16 post-training quantization. The method integrates model-based reinforcement learning with cross-task knowledge distillation to preserve policy generalization across tasks. Evaluated on the MT30 benchmark, the distilled model achieves a normalized score of 28.45, significantly outperforming the baseline 1M-parameter model trained from scratch (18.93). After quantization, the model size is reduced by approximately 50% while maintaining high performance, making deployment practical. This work establishes a scalable path toward lightweight world models suitable for edge and embedded applications.
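The core idea of the summary above, regressing a small student onto a large frozen teacher, can be sketched with a generic distillation loss. This is a minimal illustration, not the paper's exact objective (TD-MPC agents also distill dynamics, reward, and value heads); the array shapes and the `distillation_loss` helper are hypothetical.

```python
import numpy as np

def distillation_loss(student_out, teacher_out):
    """Mean-squared error between student outputs and frozen-teacher targets.

    A generic distillation objective; the paper's full loss (over latent
    dynamics, reward, and value predictions) may combine several such terms.
    """
    return float(np.mean((student_out - teacher_out) ** 2))

# Hypothetical latent predictions: a batch of 4 states with 8-dim latents.
rng = np.random.default_rng(0)
teacher = rng.normal(size=(4, 8))                   # stands in for the 317M teacher
student = teacher + 0.1 * rng.normal(size=(4, 8))   # stands in for the 1M student

loss = distillation_loss(student, teacher)
```

Minimizing this loss with respect to the student's parameters pulls the compact model's predictions toward the teacher's across all tasks in the batch.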

๐Ÿ“ Abstract
We present a novel approach to knowledge transfer in model-based reinforcement learning, addressing the critical challenge of deploying large world models in resource-constrained environments. Our method efficiently distills a high-capacity multi-task agent (317M parameters) into a compact model (1M parameters) on the MT30 benchmark, significantly improving performance across diverse tasks. Our distilled model achieves a state-of-the-art normalized score of 28.45, surpassing the original 1M-parameter model's score of 18.93. This improvement demonstrates the ability of our distillation technique to capture and consolidate complex multi-task knowledge. We further optimize the distilled model through FP16 post-training quantization, reducing its size by approximately 50%. Our approach addresses practical deployment limitations and offers insights into knowledge representation in large world models, paving the way for more efficient and accessible multi-task reinforcement learning systems in robotics and other resource-constrained applications. Code available at https://github.com/dmytro-kuzmenko/td-mpc-opt.
Problem

Research questions and friction points this paper is trying to address.

Efficiently distilling large multi-task RL models into compact ones
Improving performance of small models via knowledge distillation
Optimizing model size for resource-constrained deployment
Innovation

Methods, ideas, or system contributions that make the work stand out.

Distills large multi-task agent into compact model
Uses FP16 quantization to reduce model size
Achieves state-of-the-art performance on MT30 benchmark
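The FP16 post-training quantization step listed above can be illustrated with plain weight arrays: casting stored float32 parameters to half precision halves the on-disk footprint. This is a sketch of the size arithmetic only; the flat weight vector is a hypothetical stand-in for the 1M-parameter student, and real deployments would also verify accuracy after the cast.

```python
import numpy as np

# Hypothetical flat weight vector standing in for the 1M-parameter student.
weights_fp32 = np.random.default_rng(1).normal(size=1_000_000).astype(np.float32)

# FP16 post-training quantization: cast stored weights to half precision.
weights_fp16 = weights_fp32.astype(np.float16)

size_fp32 = weights_fp32.nbytes          # 4 bytes per parameter
size_fp16 = weights_fp16.nbytes          # 2 bytes per parameter
reduction = 1 - size_fp16 / size_fp32    # the ~50% reduction reported above
```

Because FP16 halves the bytes per parameter without changing the parameter count, the reduction is exactly 50% for the weights themselves; the paper's "approximately 50%" reflects whole-checkpoint overheads.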