🤖 AI Summary
This paper presents a systematic review of audio-visual speaker tracking (AVST) research from 2018 to 2023, covering methods that tackle speaker localization and tracking by exploiting the complementary strengths of the audio and visual modalities. Methodologically, it proposes a unified taxonomy integrating Bayesian filtering (particle and Kalman filters), deep neural networks (CNNs, RNNs, Transformers), and multimodal synchronization modeling. As the first large-scale benchmarking effort on the AV16.3 dataset, it quantitatively evaluates more than 12 state-of-the-art trackers in terms of accuracy and robustness. The analysis traces the paradigm shift that deep learning has induced in measurement extraction and state estimation, and clarifies cross-task connections with speech separation and distributed tracking. Key findings identify three critical research directions: low-latency multimodal fusion, weakly supervised learning, and cross-scenario generalization, offering concrete guidance for advancing AVST systems.
📝 Abstract
Audio-visual speaker tracking has drawn increasing attention over the past few years due to its academic value and wide range of applications. The audio and visual modalities provide complementary information for localization and tracking. Given audio and visual observations, Bayesian filters and deep learning-based methods can address data association, audio-visual fusion, and track management. In this paper, we conduct a comprehensive overview of audio-visual speaker tracking; to our knowledge, this is the first extensive survey of the field in the past five years. We introduce the family of Bayesian filters and summarize methods for obtaining audio-visual measurements. In addition, we summarize existing trackers and their performance on the AV16.3 dataset. Deep learning techniques have thrived in recent years, which has also boosted the development of audio-visual speaker tracking; we discuss their influence on both measurement extraction and state estimation. Finally, we discuss the connections between audio-visual speaker tracking and related areas such as speech separation and distributed speaker tracking.
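To make the Bayesian-filtering fusion idea concrete, here is a minimal sketch (not from the paper) of a 1-D constant-velocity Kalman filter that sequentially fuses two position measurements per frame: a coarse audio-derived estimate (e.g. from DOA) and a sharper visual detection. All noise variances and measurement values below are illustrative assumptions, not values from any AVST system.

```python
# Hypothetical 1-D Kalman filter fusing audio and visual position measurements.
# State is (position x, velocity v); P is the 2x2 covariance stored as a tuple
# (p00, p01, p10, p11). Noise values are illustrative assumptions.

def predict(x, v, P, dt=1.0, q=0.01):
    """Constant-velocity prediction: x' = x + v*dt, P' = F P F^T + Q."""
    p00, p01, p10, p11 = P
    P = (p00 + dt * (p01 + p10) + dt * dt * p11 + q,  # (0,0)
         p01 + dt * p11,                               # (0,1)
         p10 + dt * p11,                               # (1,0)
         p11 + q)                                      # (1,1)
    return x + v * dt, v, P

def update(x, v, P, z, r):
    """Scalar position update with H = [1, 0] and measurement variance r."""
    p00, p01, p10, p11 = P
    s = p00 + r                  # innovation variance
    k0, k1 = p00 / s, p10 / s    # Kalman gain
    y = z - x                    # innovation
    x, v = x + k0 * y, v + k1 * y
    P = (p00 - k0 * p00, p01 - k0 * p01,
         p10 - k1 * p00, p11 - k1 * p01)
    return x, v, P

# Track a speaker moving at roughly +0.5 m/s, fusing noisy audio positions
# (variance 0.5) with sharper visual detections (variance 0.05).
x, v, P = 0.0, 0.0, (1.0, 0.0, 0.0, 1.0)
audio_z  = [0.6, 0.9, 1.7, 2.1, 2.4]   # coarse but always available
visual_z = [0.5, 1.0, 1.5, 2.0, 2.5]   # precise when the face is visible
for za, zv in zip(audio_z, visual_z):
    x, v, P = predict(x, v, P)
    x, v, P = update(x, v, P, za, 0.5)
    x, v, P = update(x, v, P, zv, 0.05)
print(f"fused position estimate: {x:.2f} m, velocity: {v:.2f} m/s")
```

The sequential-update structure is why the filter degrades gracefully: if the visual detection is missing in a frame (occlusion), that `update` call is simply skipped and the audio measurement alone corrects the prediction. Particle filters used in AVST follow the same predict/update cycle with a sampled, non-Gaussian posterior.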