🤖 AI Summary
To address the mismatch between large language models' (LLMs) reasoning capabilities and natural spoken-language expression in conversational speech interfaces, this paper proposes the Think-Verbalize-Speak (TVS) framework, the first to decouple reasoning from spoken utterance generation. Its core innovation is an intermediate "verbalization" stage, implemented by the low-latency ReVerT model, which uses incremental and asynchronous summarization to efficiently transform chain-of-thought reasoning into natural, concise spoken language. Evaluated on multiple spoken dialogue benchmarks, TVS significantly improves speech naturalness (+18.3% Mean Opinion Score) and conciseness (−32% redundant words) while preserving 99.6% of the original reasoning accuracy. By explicitly separating reasoning from spoken delivery, TVS establishes a new paradigm for LLM-based dialogue systems that combine robust logical inference with human-like spoken interaction.
📝 Abstract
Spoken dialogue systems increasingly employ large language models (LLMs) to leverage their advanced reasoning capabilities. However, direct application of LLMs in spoken communication often yields suboptimal results due to mismatches between optimal textual and verbal delivery. While existing approaches adapt LLMs to produce speech-friendly outputs, their impact on reasoning performance remains underexplored. In this work, we propose Think-Verbalize-Speak, a framework that decouples reasoning from spoken delivery to preserve the full reasoning capacity of LLMs. Central to our method is verbalizing, an intermediate step that translates thoughts into natural, speech-ready text. We also introduce ReVerT, a latency-efficient verbalizer based on incremental and asynchronous summarization. Experiments across multiple benchmarks show that our method enhances speech naturalness and conciseness with minimal impact on reasoning. The project page with the dataset and the source code is available at https://yhytoto12.github.io/TVS-ReVerT
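The think-then-verbalize-then-speak decoupling described above can be illustrated as a concurrent three-stage pipeline, in which verbalization consumes reasoning chunks as they are produced rather than waiting for the full chain of thought. The sketch below is a minimal illustration of that idea only, not the authors' ReVerT implementation: the reasoning steps, the string-based summarizer stand-in, and all function names are hypothetical placeholders for the LLM reasoner, the ReVerT verbalizer, and a TTS engine.

```python
import asyncio

async def think(problem, thought_queue):
    # Hypothetical reasoning stage: an LLM would stream chain-of-thought
    # steps here; we use fixed strings for illustration.
    steps = [
        "Let x be the unknown; the equation is 2x + 3 = 11.",
        "Subtract 3 from both sides: 2x = 8.",
        "Divide by 2: x = 4.",
    ]
    for step in steps:
        await thought_queue.put(step)
    await thought_queue.put(None)  # sentinel: reasoning finished

async def verbalize(thought_queue, speech_queue):
    # Incremental verbalization: each thought chunk is condensed into
    # speech-ready text as soon as it arrives, instead of after the
    # whole chain of thought completes.
    while True:
        step = await thought_queue.get()
        if step is None:
            await speech_queue.put(None)
            break
        # Stand-in for an LLM-based summarizer (ReVerT's role in the paper).
        spoken = step.split(":")[-1].strip().rstrip(".")
        await speech_queue.put(spoken)

async def speak(speech_queue, transcript):
    # Speech stage: a TTS engine would synthesize audio here; we just
    # collect the verbalized utterances.
    while True:
        utterance = await speech_queue.get()
        if utterance is None:
            break
        transcript.append(utterance)

async def run_pipeline(problem):
    thought_q, speech_q = asyncio.Queue(), asyncio.Queue()
    transcript = []
    # All three stages run concurrently, so verbalization and speech
    # overlap with reasoning, which is the source of the latency benefit.
    await asyncio.gather(
        think(problem, thought_q),
        verbalize(thought_q, speech_q),
        speak(speech_q, transcript),
    )
    return transcript

if __name__ == "__main__":
    print(asyncio.run(run_pipeline("Solve 2x + 3 = 11")))
```

Because the stages communicate through queues, the first verbalized utterance can be spoken while later reasoning steps are still being generated, which mirrors the latency-efficiency goal the abstract attributes to ReVerT.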