🤖 AI Summary
Existing spoken interaction systems struggle to simultaneously achieve low latency and semantic coherence, constrained either by the high latency of cascaded architectures or the limited reasoning capabilities of end-to-end models. This work proposes LTS-VoiceAgent, a novel framework that decouples “when to think” from “how to reason incrementally.” It introduces a dynamic semantic trigger to identify meaningful speech prefixes and employs a dual-role streaming coordination mechanism—comprising a background Thinker and a foreground Speaker—to enable concurrent listening, reasoning, and speaking. This design effectively mitigates semantic fragmentation and redundant computation, facilitating fluent, streaming interactions. Evaluated on VERA, Spoken-MQA, BigBenchAudio, and a newly curated Pause-and-Repair benchmark, LTS-VoiceAgent significantly outperforms existing cascaded and streaming approaches, achieving a superior trade-off among accuracy, latency, and computational efficiency.
📝 Abstract
Real-time voice agents face a dilemma: end-to-end models often lack deep reasoning, while cascaded pipelines incur high latency by executing ASR, LLM reasoning, and TTS strictly in sequence, unlike human conversation, where listeners often start thinking before the speaker finishes. Because cascaded architectures remain the dominant choice for complex tasks, existing cascaded streaming strategies attempt to reduce this latency via mechanical segmentation (e.g., fixed chunks, VAD-based splitting) or speculative generation, but they frequently either break semantic units or waste computation on predictions that must be rolled back. To address these challenges, we propose LTS-VoiceAgent, a Listen-Think-Speak framework that explicitly separates when to think from how to reason incrementally. It features a Dynamic Semantic Trigger to detect meaningful prefixes, and a Dual-Role Stream Orchestrator that coordinates a background Thinker (for state maintenance) with a foreground Speaker (for speculative solving). This parallel design enables "thinking while speaking" without blocking responses. We also introduce a Pause-and-Repair benchmark containing natural disfluencies to stress-test streaming robustness. Experiments across VERA, Spoken-MQA, BigBenchAudio, and our benchmark show that LTS-VoiceAgent achieves a stronger accuracy-latency-efficiency trade-off than serial cascaded baselines and existing streaming strategies.
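To make the Listen-Think-Speak coordination concrete, the sketch below illustrates the general pattern the abstract describes: a trigger fires on semantically meaningful prefixes of the incoming stream, a background Thinker folds each triggered prefix into a running state while listening continues, and a foreground Speaker later produces a speculative answer without blocking the stream. This is a minimal illustration only; the class and method names (`Orchestrator`, `trigger`, `thinker`, `speaker`) and the clause-boundary heuristic are our assumptions, not the paper's actual Dynamic Semantic Trigger or Orchestrator implementation.

```python
import asyncio

class Orchestrator:
    """Toy sketch of a dual-role listen-think-speak loop (illustrative only)."""

    def __init__(self):
        self.state = []    # running semantic state maintained by the Thinker
        self.replies = []  # speculative replies emitted by the Speaker

    def trigger(self, prefix: str) -> bool:
        # Stand-in for a semantic trigger: fire at clause boundaries.
        # A real trigger would score semantic completeness, not punctuation.
        return prefix.endswith((",", ".", "?"))

    async def thinker(self, prefix: str) -> None:
        # Background role: fold the triggered prefix into the running state.
        await asyncio.sleep(0)  # yield control, simulating concurrent reasoning
        self.state.append(prefix)

    async def speaker(self) -> None:
        # Foreground role: speculate an answer from the state accumulated so
        # far, rather than waiting for the full utterance.
        await asyncio.sleep(0)
        self.replies.append(f"draft answer from {len(self.state)} prefix(es)")

    async def run(self, chunks) -> str:
        buffer, background = "", []
        for chunk in chunks:  # simulated streaming ASR output
            buffer += chunk
            if self.trigger(buffer):
                # Launch the Thinker in the background and keep listening.
                background.append(asyncio.create_task(self.thinker(buffer)))
                buffer = ""
        await asyncio.gather(*background)
        await self.speaker()  # speak once a stable state exists
        return self.replies[-1]

orch = Orchestrator()
reply = asyncio.run(orch.run(["What is 12", " times 4,", " plus 2?"]))
print(reply)  # → draft answer from 2 prefix(es)
```

The key design point mirrored here is that thinking tasks are scheduled concurrently with listening, so response generation never waits on a strictly serial ASR → LLM → TTS pipeline.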