Ensembling Synchronisation-based and Face-Voice Association Paradigms for Robust Active Speaker Detection in Egocentric Recordings

📅 2025-08-14
📈 Citations: 0
Influential: 0
🤖 AI Summary
Addressing the challenges of audio-visual active speaker detection (ASD) in first-person videos, particularly occlusion, motion blur, and audio interference, this paper proposes a dual-paradigm fusion framework. The method pairs a temporal audio-visual synchronisation model (TalkNet or Light-ASD) with a cross-modal face-voice association (FVA) model and fuses their outputs via weighted averaging to achieve complementary integration. The FVA preprocessing pipeline is also refined for better ensemble integration. This design combines the temporal sensitivity of synchronisation modelling with the resilience of biometric matching to transient visual degradation, mitigating the impact of overlapping speech and front-end segmentation errors. On the Ego4D-AVD validation set, the ensemble reaches 70.2% mean Average Precision (mAP) with a TalkNet backbone and 66.7% with a Light-ASD backbone, substantially outperforming single-paradigm baselines. These results support the effectiveness and practicality of the dual-path collaborative architecture for robust first-person ASD.
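
The weighted-averaging fusion is simple enough to sketch directly. The snippet below is a minimal illustration, not the paper's code: the weight `alpha`, the per-frame score alignment, and the assumption that both branches emit scores in [0, 1] are my own, since the summary does not specify them.

```python
# Minimal sketch of the weighted-averaging ensemble described above.
# Assumptions (not from the paper): per-frame scores are already aligned,
# the weight `alpha` is a hyperparameter tuned on the validation set,
# and both models output post-sigmoid scores in [0, 1].
import numpy as np

def fuse_scores(sync_scores: np.ndarray,
                fva_scores: np.ndarray,
                alpha: float = 0.5) -> np.ndarray:
    """Weighted average of synchronisation-based and FVA-based scores.

    sync_scores: per-frame scores from a synchronisation backbone
                 (e.g. TalkNet or Light-ASD)
    fva_scores:  per-frame scores from the face-voice association model
    alpha:       weight on the synchronisation branch (assumed hyperparameter)
    """
    assert sync_scores.shape == fva_scores.shape
    return alpha * sync_scores + (1.0 - alpha) * fva_scores

# Example: frames where the face is blurred may score low on synchrony
# but stay high on biometric matching; the fused score remains usable.
sync = np.array([0.9, 0.2, 0.1, 0.8])   # sync branch dips under motion blur
fva  = np.array([0.8, 0.7, 0.6, 0.9])   # FVA branch is steadier
print(fuse_scores(sync, fva, alpha=0.6))
```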

📝 Abstract
Audiovisual active speaker detection (ASD) in egocentric recordings is challenged by frequent occlusions, motion blur, and audio interference, which undermine the discernibility of temporal synchrony between lip movement and speech. Traditional synchronisation-based systems perform well under clean conditions but degrade sharply in first-person recordings. Conversely, face-voice association (FVA)-based methods forgo synchronisation modelling in favour of cross-modal biometric matching, exhibiting robustness to transient visual corruption but suffering when overlapping speech or front-end segmentation errors occur. In this paper, a simple yet effective ensemble approach is proposed to fuse synchronisation-dependent and synchronisation-agnostic model outputs via weighted averaging, thereby harnessing complementary cues without introducing complex fusion architectures. A refined preprocessing pipeline for the FVA-based component is also introduced to optimise ensemble integration. Experiments on the Ego4D-AVD validation set demonstrate that the ensemble attains 70.2% and 66.7% mean Average Precision (mAP) with TalkNet and Light-ASD backbones, respectively. A qualitative analysis stratified by face image quality and utterance masking prevalence further substantiates the complementary strengths of each component.
Problem

Research questions and friction points this paper is trying to address.

Synchronisation cues degrade under occlusion and motion blur in egocentric video
Overlapping speech and segmentation errors undermine face-voice association
Robust ASD in noisy first-person recordings remains an open challenge
Innovation

Methods, ideas, or system contributions that make the work stand out.

Ensembles synchronisation-based and face-voice association models
Weighted averaging for complementary cue fusion
Refined preprocessing pipeline for the FVA-based component (sketched below)
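
The summary does not detail the refined FVA preprocessing, so the following is a speculative sketch of one plausible ingredient: quality-gating face crops before embedding them for biometric matching. The function name `keep_face_crop`, the thresholds, and the Laplacian-variance sharpness proxy are all hypothetical choices, not the paper's pipeline.

```python
# Hypothetical sketch of a preprocessing filter for the FVA branch.
# The paper's refined pipeline is not detailed in this summary; the
# thresholds, the blur metric, and the minimum crop size are illustrative.
import cv2
import numpy as np

def keep_face_crop(crop_bgr: np.ndarray,
                   min_side: int = 64,
                   min_sharpness: float = 100.0) -> bool:
    """Return True if a face crop looks usable for face-voice matching.

    Drops crops that are too small or too blurred, so the FVA model only
    embeds faces with enough detail for reliable biometric matching.
    """
    h, w = crop_bgr.shape[:2]
    if min(h, w) < min_side:
        return False
    gray = cv2.cvtColor(crop_bgr, cv2.COLOR_BGR2GRAY)
    # Variance of the Laplacian is a standard, cheap sharpness proxy.
    sharpness = cv2.Laplacian(gray, cv2.CV_64F).var()
    return sharpness >= min_sharpness
```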