Enabling Chatbots with Eyes and Ears: An Immersive Multimodal Conversation System for Dynamic Interactions

πŸ“… 2025-05-31
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ€– AI Summary
Current multimodal chatbots focus predominantly on static visual interaction, neglecting auditory input and support for dynamic, multi-party, multi-turn natural dialogue. To address this gap, we propose an end-to-end immersive multimodal dialogue framework endowed with both "eyes" (vision) and "ears" (audio). Our method introduces a novel multimodal memory retrieval mechanism and audio-visual alignment modeling to enable coherent, context-aware multimodal understanding and generation. Furthermore, we construct MΒ³C, the first benchmark dataset supporting multi-session, multi-party, multimodal conversational scenarios. Extensive experiments demonstrate that our system produces coherent long-horizon responses in complex dynamic environments, and human evaluations confirm significant improvements over state-of-the-art baselines in dialogue naturalness, state consistency, and cross-modal coordination.
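The summary names a multimodal memory retrieval mechanism without detailing it. As a rough illustration only, the sketch below shows one common way such retrieval works: past turns from any modality are embedded into a shared vector space, and the top-k most similar entries are fetched for the current context. The `encode` stub, the `MultimodalMemory` class, and all field names are assumptions for illustration, not the paper's actual design.

```python
import hashlib
import numpy as np

def encode(turn: dict, dim: int = 256) -> np.ndarray:
    """Stand-in encoder: maps a turn from any modality to a unit vector.
    A real system would use trained text/vision/audio encoders sharing one
    embedding space; this deterministic stub only makes the sketch run."""
    seed = int(hashlib.md5(turn["content"].encode()).hexdigest(), 16) % (2**32)
    v = np.random.default_rng(seed).standard_normal(dim)
    return v / np.linalg.norm(v)

class MultimodalMemory:
    """Stores embedded conversation turns (text, image, or audio) and
    retrieves the k most similar ones for the current dialogue context."""

    def __init__(self) -> None:
        self.turns: list[dict] = []
        self.vectors: list[np.ndarray] = []

    def add(self, turn: dict) -> None:
        self.turns.append(turn)
        self.vectors.append(encode(turn))

    def retrieve(self, query: dict, k: int = 3) -> list[dict]:
        if not self.turns:
            return []
        # Dot product of unit vectors = cosine similarity.
        sims = np.stack(self.vectors) @ encode(query)
        return [self.turns[i] for i in np.argsort(sims)[::-1][:k]]

memory = MultimodalMemory()
memory.add({"speaker": "A", "modality": "image", "content": "photo of a beach at sunset"})
memory.add({"speaker": "B", "modality": "audio", "content": "sound of waves crashing"})
memory.add({"speaker": "A", "modality": "text", "content": "we should plan a trip soon"})
print(memory.retrieve({"speaker": "C", "modality": "text", "content": "where was that beach?"}, k=2))
```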

πŸ“ Abstract
As chatbots continue to evolve toward human-like, real-world interactions, multimodality remains an active area of research and exploration. So far, efforts to integrate multimodality into chatbots have primarily focused on image-centric tasks, such as visual dialogue and image-based instructions, placing emphasis on the "eyes" of human perception while neglecting the "ears", namely auditory aspects. Moreover, these studies often center around static interactions that focus on discussing the modality rather than naturally incorporating it into the conversation, which limits the richness of simultaneous, dynamic engagement. Furthermore, while multimodality has been explored in multi-party and multi-session conversations, task-specific constraints have hindered its seamless integration into dynamic, natural conversations. To address these challenges, this study aims to equip chatbots with "eyes and ears" capable of more immersive interactions with humans. As part of this effort, we introduce a new multimodal conversation dataset, Multimodal Multi-Session Multi-Party Conversation ($M^3C$), and propose a novel multimodal conversation model featuring multimodal memory retrieval. Our model, trained on $M^3C$, demonstrates the ability to seamlessly engage in long-term conversations with multiple speakers in complex, real-world-like settings, effectively processing visual and auditory inputs to understand and respond appropriately. Human evaluations highlight the model's strong performance in maintaining coherent and dynamic interactions, demonstrating its potential for advanced multimodal conversational agents.
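To make "multi-session, multi-party, multimodal" concrete, below is a hypothetical record layout such a dataset could use. The real $M^3C$ schema is not given on this page, so every field name here is an assumption.

```python
# Hypothetical shape of one M^3C-style conversation record; the dataset's
# actual schema is not shown on this page, so all fields are illustrative.
conversation = {
    "conversation_id": "conv-0001",
    "participants": ["Alice", "Bob", "Chris"],   # multi-party
    "sessions": [                                # multi-session
        {
            "session_id": 1,
            "turns": [                           # multimodal turns
                {"speaker": "Alice", "modality": "text",
                 "content": "Look what I found on my hike!"},
                {"speaker": "Alice", "modality": "image",
                 "content": "path/to/waterfall.jpg"},
                {"speaker": "Bob", "modality": "audio",
                 "content": "path/to/reply_waterfall.wav"},
            ],
        },
        {
            "session_id": 2,  # a later conversation among the same speakers
            "turns": [
                {"speaker": "Chris", "modality": "text",
                 "content": "Did you two ever go back to that waterfall?"},
            ],
        },
    ],
}

# Long-term coherence means linking session 2 back to session 1's image
# and audio turns, which is what multimodal memory retrieval provides.
print(len(conversation["sessions"]), "sessions,",
      len(conversation["participants"]), "participants")
```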
Problem

Research questions and friction points this paper is trying to address.

Enhancing chatbots with visual and auditory capabilities for immersive interactions
Addressing limitations in static, modality-focused chatbot conversations
Enabling seamless multimodal integration in dynamic, multi-party dialogues
Innovation

Methods, ideas, or system contributions that make the work stand out.

Multimodal memory retrieval for dynamic conversations
Integration of visual and auditory inputs (see the fusion sketch after this list)
Long-term multi-party conversational ability
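As referenced in the second bullet above, one minimal way to integrate visual and auditory inputs is late fusion: project each modality's features into a shared space and combine them. This is a sketch under assumed inputs (hypothetical pre-extracted features and untrained random projection weights), not the paper's actual audio-visual integration, which is not described on this page.

```python
import numpy as np

def fuse(vision_feat: np.ndarray, audio_feat: np.ndarray,
         w_v: np.ndarray, w_a: np.ndarray) -> np.ndarray:
    """Late fusion: project each modality into a shared space and combine."""
    return np.tanh(w_v @ vision_feat + w_a @ audio_feat)

rng = np.random.default_rng(0)
vision = rng.standard_normal(768)             # e.g. pooled image-encoder output
audio = rng.standard_normal(512)              # e.g. pooled audio-encoder output
w_v = 0.05 * rng.standard_normal((256, 768))  # untrained stand-in weights
w_a = 0.05 * rng.standard_normal((256, 512))
shared = fuse(vision, audio, w_v, w_a)
print(shared.shape)  # (256,): a joint audio-visual representation
```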