🤖 AI Summary
Lifelong imitation learning suffers from distribution shift and catastrophic forgetting as tasks are introduced sequentially. Existing approaches rely on unsupervised skill discovery or unimodal distillation, failing to ensure latent-space consistency and scaling poorly as tasks accumulate. This paper proposes M2Distill, a multimodal distillation framework for incremental imitation learning in manipulation tasks. It enforces joint latent-space consistency across the visual, linguistic, and action modalities: a multimodal distillation mechanism regularizes representation drift in all three modalities, while Gaussian Mixture Model (GMM)-based policy distribution alignment preserves policy stability throughout lifelong learning. Evaluated on the LIBERO benchmark suites LIBERO-OBJECT, LIBERO-GOAL, and LIBERO-SPATIAL, the framework surpasses state-of-the-art methods, achieving significant gains in task generalization, skill retention, and policy robustness.
📝 Abstract
Lifelong imitation learning for manipulation tasks poses significant challenges due to the distribution shifts that occur across incremental learning steps. Existing methods often rely on unsupervised skill discovery to construct an ever-growing skill library, or on distillation from multiple policies. Both strategies can lead to scalability issues as diverse manipulation tasks are continually introduced, and may fail to maintain a consistent latent space throughout the learning process, leading to catastrophic forgetting of previously learned skills. In this paper, we introduce M2Distill, a multi-modal distillation-based method for lifelong imitation learning that preserves a consistent latent space across vision, language, and action distributions throughout learning. By regulating shifts in the latent representations of each modality from the previous step to the current one, and by reducing discrepancies between the Gaussian Mixture Model (GMM) policies of consecutive learning steps, we ensure that the learned policy retains its ability to perform previously learned tasks while seamlessly integrating new skills. Extensive evaluations on the LIBERO lifelong imitation learning benchmark suites, including LIBERO-OBJECT, LIBERO-GOAL, and LIBERO-SPATIAL, demonstrate that our method consistently outperforms prior state-of-the-art methods across all evaluated metrics.
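The two regularizers described in the abstract can be sketched roughly as follows. This is a minimal NumPy illustration, not the paper's implementation: the function names, the diagonal-GMM parameterization (`weights`, `means`, `log_stds`), and the squared-error discrepancy measure are all illustrative assumptions.

```python
import numpy as np

def latent_distillation_loss(prev_feats, curr_feats):
    """Pull current-step latents toward the frozen previous-step model's
    latents, per modality (vision, language, action). Squared error is an
    illustrative choice of distance."""
    return sum(np.mean((prev_feats[m] - curr_feats[m]) ** 2) for m in prev_feats)

def gmm_policy_discrepancy(prev_pi, curr_pi):
    """Discrepancy between consecutive steps' GMM policy heads, measured
    here as squared differences of mixture weights, means, and log-stds
    (a stand-in for whatever divergence the method actually uses)."""
    return (np.mean((prev_pi["weights"] - curr_pi["weights"]) ** 2)
            + np.mean((prev_pi["means"] - curr_pi["means"]) ** 2)
            + np.mean((prev_pi["log_stds"] - curr_pi["log_stds"]) ** 2))

# Toy example: latents for 8 samples in a 16-d space, a 5-component GMM
# over 7-d actions; the new model has drifted slightly from the old one.
rng = np.random.default_rng(0)
prev_feats = {m: rng.normal(size=(8, 16)) for m in ("vision", "language", "action")}
curr_feats = {m: f + 0.01 * rng.normal(size=f.shape) for m, f in prev_feats.items()}
prev_pi = {"weights": np.full(5, 0.2), "means": np.zeros((5, 7)), "log_stds": np.zeros((5, 7))}
curr_pi = {"weights": np.full(5, 0.2), "means": 0.01 * rng.normal(size=(5, 7)), "log_stds": np.zeros((5, 7))}

distill_loss = latent_distillation_loss(prev_feats, curr_feats) + gmm_policy_discrepancy(prev_pi, curr_pi)
```

In training, `distill_loss` would be added (with some weighting) to the current step's behavior-cloning objective, so the policy fits new demonstrations while its representations and action distributions stay close to the previous step's.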