🤖 AI Summary
Traditional automatic speech recognition (ASR) systems rely primarily on word error rate (WER) for evaluation, which neglects semantic correctness, and they lack human-like interactive correction capabilities. This work proposes an agent-based interactive ASR framework that unifies semantic consistency evaluation and multi-turn interactive error correction within a large language model (LLM)-driven architecture. Specifically, it introduces LLM-as-a-Judge as a semantic-aware evaluator and leverages the judge's semantic feedback to iteratively refine recognition outputs. Experimental results on benchmarks including GigaSpeech, WenetSpeech, and ASRU 2019 demonstrate that the proposed approach significantly outperforms existing baselines in semantic fidelity and interactive correction performance, with consistent gains under both objective and subjective metrics.
📝 Abstract
Recent years have witnessed remarkable progress in automatic speech recognition (ASR), driven by advances in model architectures and large-scale training data. However, two important aspects remain underexplored. First, Word Error Rate (WER), the dominant evaluation metric for decades, treats all words equally and often fails to reflect the semantic correctness of an utterance at the sentence level. Second, interactive correction, an essential component of human communication, has rarely been systematically studied in ASR research. In this paper, we integrate these two perspectives under an agentic framework for interactive ASR. We propose leveraging LLM-as-a-Judge as a semantic-aware evaluation metric to assess recognition quality beyond token-level accuracy. Furthermore, we design an LLM-driven agent framework to simulate human-like multi-turn interaction, enabling iterative refinement of recognition outputs through semantic feedback. Extensive experiments are conducted on standard benchmarks, including GigaSpeech (English), WenetSpeech (Chinese), and the ASRU 2019 code-switching test set. Both objective and subjective evaluations demonstrate the effectiveness of the proposed framework in improving semantic fidelity and interactive correction capability. We will release the code to facilitate future research in interactive and agentic ASR.
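The judge-and-refine loop described above can be sketched in a few lines. This is a hypothetical illustration, not the paper's implementation: in the actual framework both the judge and the corrector are LLM calls, whereas here they are toy rule-based stand-ins (the `CONFUSION_PAIRS` lexicon and the scoring rule are invented for this sketch) so the multi-turn control flow is runnable offline.

```python
# Toy stand-in data: homophone-style confusions an ASR system might make.
# (Hypothetical example lexicon, not from the paper.)
CONFUSION_PAIRS = {"week": "weak", "there": "their"}

def judge(context_vocab, hypothesis):
    """LLM-as-a-Judge stand-in: flag tokens that clash with the semantic
    context and return a consistency score in [0, 1] plus feedback."""
    tokens = hypothesis.split()
    flagged = [t for t in tokens
               if t in CONFUSION_PAIRS and CONFUSION_PAIRS[t] in context_vocab]
    score = 1.0 - len(flagged) / max(len(tokens), 1)
    return score, flagged

def correct(hypothesis, flagged):
    """Correction-agent stand-in: rewrite only the tokens the judge flagged."""
    return " ".join(CONFUSION_PAIRS[t] if t in flagged else t
                    for t in hypothesis.split())

def interactive_asr(hypothesis, context_vocab, max_turns=3, threshold=1.0):
    """Multi-turn refinement: re-judge and re-correct until the semantic
    score clears the threshold or the turn budget runs out."""
    for _ in range(max_turns):
        score, flagged = judge(context_vocab, hypothesis)
        if score >= threshold:
            break
        hypothesis = correct(hypothesis, flagged)
    return hypothesis

print(interactive_asr("the signal is week", {"weak", "signal"}))
# → the signal is weak
```

The key design point mirrored here is that evaluation and correction share one feedback channel: the judge's output is not just a score but actionable feedback the agent consumes on the next turn.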