Gesture Generation (Still) Needs Improved Human Evaluation Practices: Insights from a Community-Driven State-of-the-Art Benchmark

📅 2025-11-03
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
The field of speech-driven 3D gesture generation has long lacked standardized human evaluation protocols, making fair cross-model comparison impossible. To address this, we propose a decoupled two-dimensional evaluation framework, separately assessing *motion realism* and *speech-gesture alignment*, built on the widely used BEAT2 motion-capture dataset. Leveraging a large-scale crowdsourced preference study (16,000+ pairwise votes), we systematically benchmark six state-of-the-art models, each trained by its original authors. Our findings reveal that most recent models offer no statistically significant improvement over earlier approaches, and that several published performance claims do not hold up under rigorous human evaluation. We open-source the complete benchmark pipeline, including five hours of synthesized motion sequences, 750+ rendered videos, and evaluation scripts, enabling fully reproducible user studies without model re-implementation. Together, this establishes a comparable, scalable, and human-centered evaluation standard for speech-driven gesture generation.
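To make the benchmarking step concrete, here is a minimal sketch of how crowdsourced pairwise preference votes can be aggregated into per-pair win rates and checked for statistical significance with an exact binomial test against the 50/50 null. This is an illustration only, not the paper's released evaluation scripts; the `votes` list and all model names are hypothetical.

```python
# Illustrative sketch (not the paper's pipeline): aggregate pairwise
# preference votes and test each head-to-head result for significance.
from collections import defaultdict

from scipy.stats import binomtest  # exact binomial test (scipy >= 1.7)

# Hypothetical votes: (model_a, model_b, winner). The real benchmark
# collected over 16,000 such judgements.
votes = [
    ("ModelA", "Baseline", "ModelA"),
    ("ModelA", "Baseline", "Baseline"),
    ("ModelA", "Baseline", "ModelA"),
    ("ModelB", "Baseline", "Baseline"),
    ("ModelB", "Baseline", "Baseline"),
]

wins = defaultdict(int)    # wins[(a, b)]   = votes preferring a over b
totals = defaultdict(int)  # totals[(a, b)] = head-to-head comparisons

for a, b, winner in votes:
    pair = tuple(sorted((a, b)))
    totals[pair] += 1
    if winner == pair[0]:
        wins[pair] += 1

for pair in sorted(totals):
    k, n = wins[pair], totals[pair]
    # Two-sided exact test of the null "neither model is preferred":
    # a large p-value means the observed preference could be chance.
    p = binomtest(k, n, 0.5).pvalue
    print(f"{pair[0]} vs {pair[1]}: {k}/{n} preferences, p = {p:.3f}")
```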

📝 Abstract
We review human evaluation practices in automated, speech-driven 3D gesture generation and find a lack of standardisation and frequent use of flawed experimental setups. This leads to a situation where it is impossible to know how different methods compare, or what the state of the art is. In order to address common shortcomings of evaluation design, and to standardise future user studies in gesture-generation works, we introduce a detailed human evaluation protocol for the widely-used BEAT2 motion-capture dataset. Using this protocol, we conduct a large-scale crowdsourced evaluation to rank six recent gesture-generation models -- each trained by its original authors -- across two key evaluation dimensions: motion realism and speech-gesture alignment. Our results provide strong evidence that 1) newer models do not consistently outperform earlier approaches; 2) published claims of high motion realism or speech-gesture alignment may not hold up under rigorous evaluation; and 3) in order to make progress, the field must adopt disentangled assessments of motion quality and multimodal alignment for accurate benchmarking. Finally, to drive standardisation and enable new evaluation research, we will release five hours of synthetic motion from the benchmarked models; over 750 rendered video stimuli from the user studies, enabling new evaluations without requiring model reimplementation; our open-source rendering script; and the 16,000 pairwise human preference votes collected for our benchmark.
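As a concrete illustration of the ranking step described above, the sketch below fits a Bradley-Terry preference model to a pairwise win-count matrix using standard minorisation-maximisation updates. This is one common way to order models from pairwise votes, shown under stated assumptions rather than as the authors' actual method; the win counts and model names are invented, and the code assumes every model pair received at least one comparison.

```python
import numpy as np


def bradley_terry(wins, n_iter=200, tol=1e-10):
    """Fit Bradley-Terry strengths from a pairwise win-count matrix.

    wins[i, j] = number of votes preferring model i over model j.
    Returns a strength vector (normalised to sum to 1); higher = preferred.
    Assumes every pair of models was compared at least once.
    """
    m = wins.shape[0]
    p = np.full(m, 1.0 / m)
    comparisons = wins + wins.T  # total head-to-head counts per pair
    for _ in range(n_iter):
        # MM update: p_i <- W_i / sum_{j != i} n_ij / (p_i + p_j)
        denom = np.array([
            sum(comparisons[i, j] / (p[i] + p[j]) for j in range(m) if j != i)
            for i in range(m)
        ])
        p_new = wins.sum(axis=1) / denom
        p_new /= p_new.sum()
        if np.abs(p_new - p).max() < tol:
            return p_new
        p = p_new
    return p


# Hypothetical 3-model example; rows/columns are ordered [A, B, C].
wins = np.array([
    [0, 60, 55],   # A beat B 60 times and beat C 55 times
    [40, 0, 52],
    [45, 48, 0],
])
for name, s in zip(["A", "B", "C"], bradley_terry(wins)):
    print(f"model {name}: strength {s:.3f}")
```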
Problem

Research questions and friction points this paper is trying to address.

Standardizing human evaluation practices for gesture generation models
Assessing motion realism and speech-gesture alignment in generated gestures
Addressing inconsistent performance claims through rigorous benchmarking protocols
Innovation

Methods, ideas, or system contributions that make the work stand out.

Introduced a human evaluation protocol for gesture generation
Conducted large-scale crowdsourced evaluation of six models
Released synthetic motion data and preference votes for standardization
👥 Authors

Rajmund Nagy (KTH Royal Institute of Technology)
Hendric Voss (Bielefeld University)
Thanh Hoang-Minh (University of Science – VNUHCM)
Mihail Tsakov (Independent Researcher)
Teodor Nikolov (Motorica AB)
Zeyi Zhang (Peking University)
Tenglong Ao (Peking University)
Sicheng Yang (Tencent Robotics X): Robot
Shaoli Huang (Tencent AI-Lab): Deep learning, Computer Vision
Yongkang Cheng (Mohamed bin Zayed University of Artificial Intelligence): Motion Capture, Motion Generation, Embodied AI
M. Hamza Mughal (Max Planck Institute for Informatics, Saarland Informatics Campus): Computer Vision, Multi-modal Machine Learning, Vision and Language
Rishabh Dabral (Max Planck Institute for Informatics): Computer Vision, Deep Learning, Human Pose Estimation
Kiran Chhatre (KTH Royal Institute of Technology): Computer Vision, Machine Learning, Computer Graphics
Christian Theobalt (Professor, Max Planck Institute for Informatics, Saarland Informatics Campus, Saarland University): Computer Graphics, Computer Vision, AI & Machine Learning, HCI, Virtual/Augmented Reality
Libin Liu (Peking University)
Stefan Kopp (Bielefeld University, CITEC): Artificial Intelligence, Cognitive Science, Socially Interactive Agents, Artificial Social Intelligence, Conversational Agents
Rachel McDonnell (Professor, Trinity College Dublin): Computer Graphics, Motion Capture, Computer Animation, Visual Perception, Virtual Reality
Michael Neff (University of California, Davis)
Taras Kucherenko (SEED – Electronic Arts)
Youngwoo Yoon (ETRI): Human-Robot Interaction, Human-Computer Interaction, Machine Learning
Gustav Eje Henter (KTH Royal Institute of Technology, Stockholm, Sweden): speech synthesis, character animation, probabilistic modelling, synthesis tasks, GenAI