EgoSpeak: Learning When to Speak for Egocentric Conversational Agents in the Wild

πŸ“… 2025-02-17
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ“„ PDF
πŸ€– AI Summary
This work addresses the critical challenge of deciding when to initiate speech in real-world, first-person streaming video. We propose the first end-to-end framework for predicting speech initiation timing from an egocentric viewpoint. Methodologically, we design a multimodal, temporal, online architecture that integrates real-time RGB visual feature extraction, context-aware attention, and sliding-window streaming inference; it is pretrained on our large-scale in-the-wild dialogue dataset, YT-Conversation, enabling it to handle untrimmed, full-length video. Our key contribution is the first unified modeling of egocentric visual perception and dynamic speech-initiation decision-making. Experiments on the EasyCom and Ego4D benchmarks demonstrate significant improvements over random and silence-based baselines. Ablation studies confirm the essential roles of multimodal input and context length in prediction accuracy.
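
The summary above describes sliding-window streaming inference over per-frame features with context-aware attention. The snippet below is a minimal PyTorch sketch of that idea under stated assumptions: the module names, feature dimensions, window length, and the use of an audio stream as the second modality are illustrative guesses, not the authors' released EgoSpeak code.

```python
# Minimal sketch of sliding-window, online speech-initiation prediction.
# Module names, dimensions, and the window length are illustrative assumptions.
from collections import deque

import torch
import torch.nn as nn


class SpeakDecisionModel(nn.Module):
    """Scores how likely the current frame is a good moment to start speaking."""

    def __init__(self, rgb_dim=512, audio_dim=128, hidden_dim=256, num_layers=2):
        super().__init__()
        # Project per-frame RGB and audio features into a shared space.
        self.rgb_proj = nn.Linear(rgb_dim, hidden_dim)
        self.audio_proj = nn.Linear(audio_dim, hidden_dim)
        # Context-aware attention over the temporal window of recent frames.
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=hidden_dim, nhead=4, batch_first=True
        )
        self.temporal = nn.TransformerEncoder(encoder_layer, num_layers=num_layers)
        # Binary head: speak vs. keep listening, evaluated on the newest frame.
        self.head = nn.Linear(hidden_dim, 1)

    def forward(self, rgb_feats, audio_feats):
        # rgb_feats: (B, T, rgb_dim), audio_feats: (B, T, audio_dim)
        x = self.rgb_proj(rgb_feats) + self.audio_proj(audio_feats)
        x = self.temporal(x)                         # attend over the context window
        return torch.sigmoid(self.head(x[:, -1]))    # score for the most recent frame


def streaming_inference(model, frame_stream, window=64, threshold=0.5):
    """Slide a fixed-length window over an untrimmed stream, one frame at a time."""
    rgb_buf, audio_buf = deque(maxlen=window), deque(maxlen=window)
    for t, (rgb_feat, audio_feat) in enumerate(frame_stream):
        rgb_buf.append(rgb_feat)
        audio_buf.append(audio_feat)
        rgb = torch.stack(list(rgb_buf)).unsqueeze(0)      # (1, T, rgb_dim)
        audio = torch.stack(list(audio_buf)).unsqueeze(0)  # (1, T, audio_dim)
        with torch.no_grad():
            p_speak = model(rgb, audio).item()
        if p_speak > threshold:
            yield t, p_speak  # the agent decides to initiate speech at frame t
```

In this sketch the decision is re-evaluated at every incoming frame using only the most recent `window` frames, which is what keeps inference online and applicable to untrimmed streams rather than pre-segmented clips.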

πŸ“ Abstract
Predicting when to initiate speech in real-world environments remains a fundamental challenge for conversational agents. We introduce EgoSpeak, a novel framework for real-time speech initiation prediction in egocentric streaming video. By modeling the conversation from the speaker's first-person viewpoint, EgoSpeak is tailored for human-like interactions in which a conversational agent must continuously observe its environment and dynamically decide when to talk. Our approach bridges the gap between simplified experimental setups and complex natural conversations by integrating four key capabilities: (1) first-person perspective, (2) RGB processing, (3) online processing, and (4) untrimmed video processing. We also present YT-Conversation, a diverse collection of in-the-wild conversational videos from YouTube, as a resource for large-scale pretraining. Experiments on EasyCom and Ego4D demonstrate that EgoSpeak outperforms random and silence-based baselines in real time. Our results also highlight the importance of multimodal input and context length in effectively deciding when to speak.
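
The abstract compares EgoSpeak against random and silence-based baselines. The sketch below shows one plausible reading of those two reference policies (speak with a fixed per-frame probability, or speak after a stretch of detected silence); the parameter values and exact definitions are assumptions, not taken from the paper.

```python
# Hypothetical sketch of the two reference baselines mentioned above;
# the exact definitions used in the paper may differ.
import random


def random_baseline(num_frames, speak_rate=0.05, seed=0):
    """Decide to speak at each frame with a fixed probability."""
    rng = random.Random(seed)
    return [rng.random() < speak_rate for _ in range(num_frames)]


def silence_baseline(voice_activity, min_silence=30):
    """Speak once the interlocutor has been silent for `min_silence` frames."""
    decisions, silent_run = [], 0
    for active in voice_activity:          # True if someone else is speaking
        silent_run = 0 if active else silent_run + 1
        decisions.append(silent_run >= min_silence)
    return decisions
```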
Problem

Research questions and friction points this paper is trying to address.

Predicting speech initiation in real time
Enhancing human-like conversational interactions
Integrating multimodal input for decision-making
Innovation

Methods, ideas, or system contributions that make the work stand out.

First-person perspective modeling
Real-time RGB video processing
Untrimmed video analysis
πŸ”Ž Similar Papers
No similar papers found.
Authors

Junhyeok Kim
Yonsei University

Min Soo Kim
Yonsei University
NLP, LLM, AI agents, Social AI, Explainable AI

Jiwan Chung
Yonsei University
Computer Vision, NLP, Multimodal Learning

Jungbin Cho
Yonsei University

Jisoo Kim
Yonsei University

Sungwoong Kim
Associate Professor, Korea University
artificial general intelligence

Gyeongbo Sim
Multimodal AI Lab., NC Research, NCSOFT Corporation

Youngjae Yu
Yonsei University