DynaMimicGen: A Data Generation Framework for Robot Learning of Dynamic Tasks

📅 2025-11-20
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
Robot manipulation policy learning in dynamic environments relies heavily on large-scale human demonstrations, incurring high data collection costs. Method: This paper proposes a novel method that generates high-quality training data from minimal human demonstrations. Its core innovation is the first integration of Dynamic Movement Primitives (DMPs) with a subtask segmentation mechanism, enabling real-time, adaptive generalization to variations in object pose, robot state, and scene geometry. The approach unifies behavior cloning, trajectory generation, and hierarchical task decomposition to support data-efficient cross-scene generalization. Contribution/Results: Evaluated on long-horizon, contact-rich dynamic tasks, including cube stacking and placing mugs in drawers, the method significantly reduces dependence on human demonstrations while improving policy robustness and generalization across diverse scenarios.
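For context, the standard discrete DMP formulation (Ijspeert et al.) behind the summary above is sketched below; the paper may use a Cartesian or otherwise modified variant, so treat this as background rather than the authors' exact equations. Here y is the commanded position, z a scaled velocity, x the phase variable, g the goal, τ a temporal scaling factor, and ψ_i Gaussian basis functions with weights w_i fit to a demonstrated segment. Because g appears explicitly, it can be updated online to track a moving object, which is what makes real-time adaptation possible.

```latex
% Standard discrete DMP (background only; the paper's variant may differ)
\begin{align}
  \tau \dot{z} &= \alpha_z \bigl(\beta_z\,(g - y) - z\bigr) + f(x) && \text{(transformation system)} \\
  \tau \dot{y} &= z \\
  \tau \dot{x} &= -\alpha_x\, x && \text{(canonical system)} \\
  f(x) &= \frac{\sum_i \psi_i(x)\, w_i}{\sum_i \psi_i(x)}\; x\,(g - y_0) && \text{(learned forcing term)}
\end{align}
```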

📝 Abstract
Learning robust manipulation policies typically requires large and diverse datasets, the collection of which is time-consuming, labor-intensive, and often impractical for dynamic environments. In this work, we introduce DynaMimicGen (D-MG), a scalable dataset generation framework that enables policy training from minimal human supervision while uniquely supporting dynamic task settings. Given only a few human demonstrations, D-MG first segments the demonstrations into meaningful sub-tasks, then leverages Dynamic Movement Primitives (DMPs) to adapt and generalize the demonstrated behaviors to novel and dynamically changing environments. Improving on prior methods that rely on static assumptions or simplistic trajectory interpolation, D-MG produces smooth, realistic, and task-consistent Cartesian trajectories that adapt in real time to changes in object poses, robot states, or scene geometry during task execution. Our method supports diverse scenarios, including varied scene layouts, object instances, and robot configurations, making it suitable for both static and highly dynamic manipulation tasks. We show that robot agents trained via imitation learning on D-MG-generated data achieve strong performance across long-horizon and contact-rich benchmarks, including tasks like cube stacking and placing mugs in drawers, even under unpredictable environment changes. By eliminating the need for extensive human demonstrations and enabling generalization in dynamic settings, D-MG offers a powerful and efficient alternative to manual data collection, paving the way toward scalable, autonomous robot learning.
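As a concrete illustration of the adaptation step described in the abstract, the sketch below fits a one-dimensional DMP to a demonstrated segment and rolls it out toward a goal that may move during execution. It is a minimal sketch under standard DMP assumptions, not the authors' implementation; the class and parameter names (DMP1D, n_basis, goal_fn) are illustrative, and D-MG would apply something like this per Cartesian dimension of each segmented sub-task.

```python
import numpy as np

# Minimal one-dimensional discrete DMP (standard Ijspeert-style formulation).
# Illustrative sketch only -- not the authors' implementation.

class DMP1D:
    def __init__(self, n_basis=30, alpha_z=25.0, beta_z=6.25, alpha_x=3.0):
        self.n_basis, self.alpha_z, self.beta_z, self.alpha_x = n_basis, alpha_z, beta_z, alpha_x
        self.centers = np.exp(-alpha_x * np.linspace(0.0, 1.0, n_basis))  # basis centers in phase space
        self.widths = n_basis ** 1.5 / self.centers                       # common width heuristic
        self.weights = np.zeros(n_basis)

    def fit(self, y_demo, dt):
        """Fit the forcing-term weights to one demonstrated segment (locally weighted regression)."""
        self.tau = (len(y_demo) - 1) * dt
        self.y0, g = float(y_demo[0]), float(y_demo[-1])
        yd = np.gradient(y_demo, dt)
        ydd = np.gradient(yd, dt)
        x = np.exp(-self.alpha_x * np.arange(len(y_demo)) * dt / self.tau)  # phase variable over the demo
        f_target = self.tau ** 2 * ydd - self.alpha_z * (self.beta_z * (g - y_demo) - self.tau * yd)
        scale = x * (g - self.y0)
        for i in range(self.n_basis):
            psi = np.exp(-self.widths[i] * (x - self.centers[i]) ** 2)
            self.weights[i] = (psi * scale * f_target).sum() / ((psi * scale ** 2).sum() + 1e-10)

    def rollout(self, goal_fn, dt, n_steps):
        """Integrate the DMP; goal_fn(t) may return a moving goal, e.g. a tracked object pose."""
        y, z, x, traj = self.y0, 0.0, 1.0, []
        for k in range(n_steps):
            g = goal_fn(k * dt)                                   # goal can be updated online
            psi = np.exp(-self.widths * (x - self.centers) ** 2)
            f = (psi @ self.weights) / (psi.sum() + 1e-10) * x * (g - self.y0)
            z += dt / self.tau * (self.alpha_z * (self.beta_z * (g - y) - z) + f)
            y += dt / self.tau * z
            x += dt / self.tau * (-self.alpha_x * x)
            traj.append(y)
        return np.array(traj)

# Example: fit a toy demonstrated reach, then re-target it to a goal that shifts mid-execution.
demo = np.linspace(0.0, 0.5, 200) ** 2
dmp = DMP1D()
dmp.fit(demo, dt=0.01)
traj = dmp.rollout(lambda t: 0.25 if t < 1.0 else 0.35, dt=0.01, n_steps=300)
```

A D-MG-style pipeline would then replay such rollouts against the new scene's object poses to synthesize full demonstrations for imitation learning, though the exact synthesis procedure is the paper's, not this sketch's.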
Problem

Research questions and friction points this paper is trying to address.

Generates robot training data for dynamic tasks with minimal human demonstrations
Adapts segmented human demonstrations to novel and changing environments
Produces realistic trajectories that respond to real-time environmental changes
Innovation

Methods, ideas, or system contributions that make the work stand out.

Framework segments demonstrations into sub-tasks (a segmentation sketch follows this list)
Uses Dynamic Movement Primitives to adapt behaviors
Generates smooth Cartesian trajectories for dynamic environments
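The first innovation bullet refers to sub-task segmentation; the snippet below shows one hypothetical heuristic for it, splitting a demonstration at gripper open/close transitions. The paper describes its own segmentation mechanism, which may differ; the function name segment_by_gripper and the gripper-event rule are assumptions made purely for illustration.

```python
import numpy as np

# Hypothetical segmentation heuristic: split a demonstration at gripper
# open/close transitions. Illustrative only -- D-MG's actual sub-task
# segmentation mechanism may rely on different cues.

def segment_by_gripper(gripper_cmd, min_len=10):
    """gripper_cmd: 1-D array of per-step gripper commands (e.g. 0 = open, 1 = closed).
    Returns a list of (start, end) index pairs, one per candidate sub-task segment."""
    change_points = np.flatnonzero(np.diff(gripper_cmd.astype(int)) != 0) + 1
    bounds = np.concatenate(([0], change_points, [len(gripper_cmd)]))
    return [(int(s), int(e)) for s, e in zip(bounds[:-1], bounds[1:]) if e - s >= min_len]
```

Each returned segment would then be handed to a DMP fit like the one sketched after the abstract.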
👥 Authors
Vincenzo Pomponi
Institute of Systems and Technologies for Sustainable Production (ISTePS), Department of Innovative Technologies (DTI), University of Applied Sciences and Arts of Southern Switzerland (SUPSI), Lugano, Switzerland
Paolo Franceschi
Istituto Dalle Molle di studi sull’intelligenza artificiale (IDSIA), Department of Innovative Technologies (DTI), University of Applied Sciences and Arts of Southern Switzerland (SUPSI), Lugano, Switzerland
Stefano Baraldo
SUPSI - University of Applied Sciences of Southern Switzerland
Loris Roveda
Istituto Dalle Molle di studi sull’intelligenza artificiale (IDSIA), Department of Innovative Technologies (DTI), University of Applied Sciences and Arts of Southern Switzerland (SUPSI), Lugano, Switzerland; Department of Mechanical Engineering, Politecnico di Milano (PoliMi), Milan, Italy
Oliver Avram
Institute of Systems and Technologies for Sustainable Production (ISTePS), Department of Innovative Technologies (DTI), University of Applied Sciences and Arts of Southern Switzerland (SUPSI), Lugano, Switzerland
Luca Maria Gambardella
Istituto Dalle Molle di studi sull’intelligenza artificiale (IDSIA), Faculty of Informatics, Università della Svizzera Italiana (USI), Lugano, Switzerland
Anna Valente
SUPSI