DYMO-Hair: Generalizable Volumetric Dynamics Modeling for Robot Hair Manipulation

📅 2025-10-07
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the challenge of autonomous hair styling, with its fine-grained physical structure and highly dynamic deformations, this paper proposes a model-based robot hairstyle shaping framework for open-world scenarios. Methodologically: (1) it introduces an action-conditioned latent state editing mechanism, coupling a compact 3D hairstyle latent space, pre-trained at scale, with a learned latent dynamics model; (2) using a novel hair physics simulator for synthetic data generation, it deploys an MPPI-based planner to enable vision-guided closed-loop control. Contributions include generalizable dynamics modeling of unseen hairstyles and zero-shot sim-to-real transfer. In simulation, the method achieves on average 22% lower final geometric error and a 42% higher task success rate than the state-of-the-art system. On real wigs, it robustly completes challenging styling tasks on which the state-of-the-art system fails.

📝 Abstract
Hair care is an essential daily activity, yet it remains inaccessible to individuals with limited mobility and challenging for autonomous robot systems due to the fine-grained physical structure and complex dynamics of hair. In this work, we present DYMO-Hair, a model-based robot hair care system. We introduce a novel dynamics learning paradigm that is suited for volumetric quantities such as hair, relying on an action-conditioned latent state editing mechanism, coupled with a compact 3D latent space of diverse hairstyles to improve generalizability. This latent space is pre-trained at scale using a novel hair physics simulator, enabling generalization across previously unseen hairstyles. Using the dynamics model with a Model Predictive Path Integral (MPPI) planner, DYMO-Hair is able to perform visual goal-conditioned hair styling. Experiments in simulation demonstrate that DYMO-Hair's dynamics model outperforms baselines on capturing local deformation for diverse, unseen hairstyles. DYMO-Hair further outperforms baselines in closed-loop hair styling tasks on unseen hairstyles, with an average of 22% lower final geometric error and 42% higher success rate than the state-of-the-art system. Real-world experiments exhibit zero-shot transferability of our system to wigs, achieving consistent success on challenging unseen hairstyles where the state-of-the-art system fails. Together, these results introduce a foundation for model-based robot hair care, advancing toward more generalizable, flexible, and accessible robot hair styling in unconstrained physical environments. More details are available on our project page: https://chengyzhao.github.io/DYMOHair-web/.
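The action-conditioned latent state editing paradigm described in the abstract can be sketched roughly as follows. This is an illustrative toy, not the paper's actual architecture: the encoder, the editor weights `W`, and all dimensions are placeholder assumptions. The key idea it demonstrates is that the dynamics model predicts a latent *delta* conditioned on the robot action and applies it to the current latent code, rather than re-predicting the full volumetric state from scratch.

```python
import numpy as np

LATENT_DIM, ACTION_DIM = 8, 4
rng = np.random.default_rng(0)
# Toy "editor" weights; in the paper this would be a learned network.
W = rng.normal(0, 0.1, size=(LATENT_DIM, LATENT_DIM + ACTION_DIM))

def encode(volume):
    """Stand-in encoder: pool a voxel grid down to a fixed-size latent code.

    The real system uses a 3D latent space pre-trained at scale on diverse
    hairstyles; mean-pooling here is only a shape-compatible placeholder.
    """
    flat = volume.reshape(-1)
    usable = flat[: LATENT_DIM * (flat.size // LATENT_DIM)]
    return usable.reshape(LATENT_DIM, -1).mean(axis=1)

def edit_latent(z, action):
    """Predict a latent delta from (z, action) and apply it: z' = z + delta."""
    x = np.concatenate([z, action])
    delta = np.tanh(W @ x)  # toy editor network
    return z + delta

def rollout(z0, actions):
    """Roll the latent state forward through a sequence of actions."""
    z = z0
    for a in actions:
        z = edit_latent(z, a)
    return z
```

Editing a compact latent rather than a raw volume is what lets the same dynamics model generalize across unseen hairstyles: only the shared latent space, not any specific hairstyle, must be modeled.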
Problem

Research questions and friction points this paper is trying to address.

Modeling complex hair dynamics for robot manipulation tasks
Enabling robots to perform hair styling on unseen hairstyles
Improving accessibility of hair care for mobility-limited individuals
Innovation

Methods, ideas, or system contributions that make the work stand out.

Action-conditioned latent state editing for dynamics
Compact 3D latent space for hairstyle generalization
Model Predictive Path Integral planner for styling
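The MPPI planning step above can be sketched as follows. The `dynamics` and `cost` functions are stand-ins for the paper's learned latent dynamics model and goal-conditioned geometric error; all hyperparameters are illustrative assumptions. The sketch shows the core MPPI loop: sample action sequences, roll them out through the dynamics model, weight each sequence by its exponentiated negative cost, and execute the weighted-average first action.

```python
import numpy as np

def dynamics(z, a):
    # Placeholder latent dynamics (the paper's action-conditioned editor).
    return z + 0.1 * a

def cost(z, z_goal):
    # Placeholder cost: squared distance to the goal hairstyle latent.
    return np.sum((z - z_goal) ** 2)

def mppi_plan(z0, z_goal, horizon=10, samples=256, lam=1.0, sigma=0.5, dim=3):
    """Sample action sequences, score rollouts, and return the first action
    of the cost-weighted average sequence (receding-horizon control)."""
    rng = np.random.default_rng(0)
    actions = rng.normal(0.0, sigma, size=(samples, horizon, dim))
    costs = np.zeros(samples)
    for k in range(samples):
        z = z0.copy()
        for t in range(horizon):
            z = dynamics(z, actions[k, t])
            costs[k] += cost(z, z_goal)
    # Softmax-style weights (lower cost -> higher weight), stabilized
    # by subtracting the minimum cost before exponentiating.
    w = np.exp(-(costs - costs.min()) / lam)
    w /= w.sum()
    # Weighted average over sampled sequences; execute only the first step.
    return np.einsum("k,ktd->td", w, actions)[0]
```

In a closed loop, the robot would execute this first action, re-encode the observed hair state, and replan, which is what makes the styling vision-guided rather than open-loop.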