🤖 AI Summary
Dialogue turn-taking prediction remains challenging due to the need for fine-grained, temporally sensitive multimodal modeling, particularly in multi-party settings where nonverbal cues like gestures carry critical turn-related intent.
Method: This work proposes a semantics-aware, fine-grained turn-taking prediction framework integrating semantic gestures. We extend the DnD Gesture corpus with 2,663 new fine-grained semantic gesture annotations—creating the first multi-participant dialogue dataset supporting semantic–modality alignment. We introduce a novel semantic-guided gesture representation that explicitly encodes turn-taking intention, and employ a Mixture-of-Experts architecture to jointly fuse textual, acoustic, and semantic gesture features.
Contribution/Results: Experiments demonstrate significant improvements over unimodal and conventional multimodal baselines on turn-taking prediction. Results validate the essential complementary role of semantic gestures in modeling time-critical conversational behavior, establishing a new paradigm for multimodal conversational intelligence.
📝 Abstract
In conversation, humans use multimodal cues, such as speech, gestures, and gaze, to manage turn-taking. While linguistic and acoustic features are informative, gestures provide complementary cues for modeling these transitions. To study this, we introduce DnD Gesture++, an extension of the multi-party DnD Gesture corpus enriched with 2,663 semantic gesture annotations spanning iconic, metaphoric, deictic, and discourse types. Using this dataset, we model turn-taking prediction through a Mixture-of-Experts framework integrating text, audio, and gestures. Experiments show that incorporating semantically guided gestures yields consistent performance gains over baselines, demonstrating their complementary role in multimodal turn-taking.