🤖 AI Summary
Current streaming speech translation (StreamST) systems predominantly rely on pre-segmented speech from external segmentation models, resulting in limited contextual modeling and suboptimal policy decision-making. This work proposes a unified Large Speech-Language Model (LSLM) that jointly performs speech reception, dynamic segmentation, low-latency policy decisions, and translation generation in an end-to-end manner, eliminating the need for external speech segmentation modules. The core innovation is a speech Chain-of-Thought (Speech-CoT) together with a streaming CoT training paradigm, which enables multi-stage implicit reasoning that jointly optimizes segmentation, policy learning, and translation within a single architecture. This design substantially enhances contextual modeling capacity and policy generalization while enabling efficient training with minimal policy supervision. On standard StreamST benchmarks, the approach achieves state-of-the-art translation quality at significantly lower latency.
📝 Abstract
Streaming speech translation (StreamST) requires determining the appropriate timing, known as the policy, at which to generate translations while continuously receiving source speech, balancing low latency against high translation quality. However, existing StreamST methods typically operate on sentence-level speech segments, a setting referred to as simultaneous speech translation (SimulST). In practice, they must be paired with external segmentation models to accomplish StreamST, and the truncated speech segments force SimulST models to make policy decisions and generate translations from limited contextual information. Moreover, SimulST models struggle to learn effective policies due to the complexity of speech inputs and cross-lingual generation. To address these challenges, we propose StreamUni, which achieves StreamST with a single unified Large Speech-Language Model (LSLM). Specifically, StreamUni incorporates a speech Chain-of-Thought (CoT) to guide the LSLM in generating multi-stage outputs. Leveraging these multi-stage outputs, StreamUni simultaneously accomplishes speech segmentation, policy decision, and translation generation, completing StreamST without requiring massive policy-specific training. Additionally, we propose a streaming CoT training method that enhances low-latency policy decisions and generation capabilities using limited CoT data. Experiments demonstrate that our approach achieves state-of-the-art performance on StreamST tasks.
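To make the multi-stage idea concrete, the loop below is a minimal, purely illustrative sketch of how a unified model could interleave segmentation, policy decision, and translation over a stream, as the abstract describes. All names here (`StubLSLM`, `StageOutput`, `stream_translate`) and the silence-as-boundary rule are hypothetical stand-ins, not the paper's actual API or policy.

```python
# Hypothetical sketch: one model step per incoming speech chunk yields a
# CoT-style multi-stage output (transcript, policy decision, translation).
from dataclasses import dataclass
from typing import List

@dataclass
class StageOutput:
    transcript: str    # stage 1: recognized source text so far
    boundary: bool     # stage 2: policy decision, emit a translation now?
    translation: str   # stage 3: translation of the completed segment, if any

class StubLSLM:
    """Toy stand-in for the LSLM: treats an empty chunk (silence) as a boundary."""
    def __init__(self):
        self.buffer: List[str] = []

    def step(self, chunk: str) -> StageOutput:
        if chunk == "":  # "model" decides the current segment is complete
            segment = " ".join(self.buffer)
            self.buffer = []
            return StageOutput(segment, True, f"<fr:{segment}>")
        self.buffer.append(chunk)
        return StageOutput(" ".join(self.buffer), False, "")

def stream_translate(chunks: List[str]) -> List[str]:
    """Drive the model over the stream; emit whenever the policy says so."""
    model = StubLSLM()
    outputs = []
    for chunk in chunks:
        out = model.step(chunk)
        if out.boundary:  # low-latency emission at model-chosen boundaries
            outputs.append(out.translation)
    return outputs
```

The point of the sketch is that segmentation, the READ/WRITE policy, and translation all fall out of one model's per-step output, rather than a pipeline of separate segmenter and SimulST components.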