Teacher-Student Diffusion Model for Text-Driven 3D Hand Motion Generation

📅 2026-03-25
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Existing text-driven 3D hand motion generation methods often overlook fine-grained gestures or rely on explicit 3D object meshes, limiting their generalizability. This work proposes TSHaMo, a novel framework that introduces a teacher-student diffusion mechanism to this task for the first time. The teacher model leverages auxiliary structural signals—such as MANO parameters—to provide guidance, while the student model learns to generate hand motions from text alone. Through collaborative training, the student acquires the ability to produce high-quality motions at inference time without requiring any 3D object input. The approach is compatible with various diffusion backbones and auxiliary signals, and achieves significant improvements in motion quality and diversity on the GRAB and H2O datasets. Ablation studies further confirm the robustness and flexibility of the proposed method.

Technology Category

Application Category

📝 Abstract
Generating realistic 3D hand motion from natural language is vital for VR, robotics, and human-computer interaction. Existing methods either focus on full-body motion, overlooking detailed hand gestures, or require explicit 3D object meshes, limiting generality. We propose TSHaMo, a model-agnostic teacher-student diffusion framework for text-driven hand motion generation. The student model learns to synthesize motions from text alone, while the teacher leverages auxiliary signals (e.g., MANO parameters) to provide structured guidance during training. A co-training strategy enables the student to benefit from the teacher's intermediate predictions while remaining text-only at inference. Evaluated using two diffusion backbones on GRAB and H2O, TSHaMo consistently improves motion quality and diversity. Ablations confirm its robustness and flexibility in using diverse auxiliary inputs without requiring 3D objects at test time.
Problem

Research questions and friction points this paper is trying to address.

text-driven 3D hand motion
realistic hand gestures
3D object mesh dependency
motion generation generality
Innovation

Methods, ideas, or system contributions that make the work stand out.

teacher-student diffusion
text-driven 3D hand motion
model-agnostic framework
auxiliary signal guidance
co-training strategy
🔎 Similar Papers
No similar papers found.