MFA-KWS: Effective Keyword Spotting with Multi-head Frame-asynchronous Decoding

📅 2025-05-26
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing ASR-based keyword spotting (KWS) methods lack explicit focus on keyword detection during search-space exploration, leading to a trade-off between accuracy and real-time performance. This paper proposes a streaming CTC-Transducer hybrid framework with multi-head frame-asynchronous decoding: (i) a keyword-specific phone-synchronous CTC decoder; (ii) a Token-and-Duration Transducer replacing RNN-T for improved alignment modeling; and (iii) a consistency-based CDC-Last score fusion strategy. Evaluated on Snips, MobvoiHotwords, and LibriKWS-20, the method achieves state-of-the-art accuracy with strong noise robustness, and it accelerates inference by 47–63% over frame-synchronous baselines, reducing edge-device latency. The core contribution lies in deeply embedding keyword detection objectives into the streaming decoding architecture, enabling joint optimization of accuracy, real-time responsiveness, and deployability on resource-constrained devices.

📝 Abstract
Keyword spotting (KWS) is essential for voice-driven applications, demanding both accuracy and efficiency. Traditional ASR-based KWS methods, such as greedy and beam search, explore the entire search space without explicitly prioritizing keyword detection, often leading to suboptimal performance. In this paper, we propose an effective keyword-specific KWS framework by introducing a streaming-oriented CTC-Transducer-combined frame-asynchronous system with multi-head frame-asynchronous decoding (MFA-KWS). Specifically, MFA-KWS employs keyword-specific phone-synchronous decoding for CTC and replaces the conventional RNN-T with a Token-and-Duration Transducer to enhance both performance and efficiency. Furthermore, we explore various score fusion strategies, including single-frame-based and consistency-based methods. Extensive experiments demonstrate the superior performance of MFA-KWS, which achieves state-of-the-art results on both fixed-keyword and arbitrary-keyword datasets, such as Snips, MobvoiHotwords, and LibriKWS-20, while exhibiting strong robustness in noisy environments. Among fusion strategies, the consistency-based CDC-Last method delivers the best performance. Additionally, MFA-KWS achieves a 47% to 63% speed-up over frame-synchronous baselines across various datasets. Extensive experimental results confirm that MFA-KWS is an effective and efficient KWS framework, making it well-suited for on-device deployment.
Problem

Research questions and friction points this paper is trying to address.

Improves keyword spotting accuracy and efficiency
Replaces traditional ASR methods with frame-asynchronous decoding
Enhances robustness in noisy environments
Innovation

Methods, ideas, or system contributions that make the work stand out.

Multi-head frame-asynchronous decoding for KWS
Token-and-Duration Transducer replaces RNN-T
Consistency-based CDC-Last fusion strategy
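
The paper does not spell out the CDC-Last fusion mechanics in this summary, but a consistency-based fusion of the two decoding heads can be sketched as follows. This is a minimal illustrative sketch, assuming each head (CTC and Transducer) emits a sequence of keyword confidence scores, that "consistency" means both heads must agree the keyword is present, and that "Last" means the fused score is taken from each head's final emission; the function name, threshold, and geometric-mean fusion are all hypothetical choices, not the authors' method.

```python
import math

def fuse_keyword_scores(ctc_scores, tdt_scores, threshold=0.5):
    """Hypothetical consistency-based fusion of two decoding heads.

    ctc_scores / tdt_scores: keyword confidence scores in (0, 1]
    emitted asynchronously by the CTC and Transducer heads.
    Returns a fused score only when both heads' last scores agree
    that the keyword is present; otherwise returns None (no trigger).
    """
    if not ctc_scores or not tdt_scores:
        return None  # one head never detected the keyword
    ctc_last, tdt_last = ctc_scores[-1], tdt_scores[-1]
    # Consistency check: require both heads to exceed the threshold.
    if ctc_last < threshold or tdt_last < threshold:
        return None
    # Fuse the last-emission scores via a geometric mean in log domain.
    return math.exp(0.5 * (math.log(ctc_last) + math.log(tdt_last)))
```

Requiring agreement between heads before triggering is one plausible way such a strategy suppresses false alarms in noise, which is consistent with the robustness results the abstract reports.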
Yu Xi
X-Lance Lab, Department of Computer Science and Engineering & MoE Key Lab of Artificial Intelligence, Shanghai Jiao Tong University, Shanghai, 200240, P. R. China
Haoyu Li
X-Lance Lab, Department of Computer Science and Engineering & MoE Key Lab of Artificial Intelligence, Shanghai Jiao Tong University, Shanghai, 200240, P. R. China
Xiaoyu Gu
Yidi Jiang
Ph.D., National University of Singapore
Multimodal · Machine Learning · Speech Processing
Kai Yu
X-Lance Lab, Department of Computer Science and Engineering & MoE Key Lab of Artificial Intelligence, Shanghai Jiao Tong University, Shanghai, 200240, P. R. China