StreamUni: Achieving Streaming Speech Translation with a Unified Large Speech-Language Model

📅 2025-07-10
📈 Citations: 0
Influential: 0
🤖 AI Summary
Current streaming speech translation (StreamST) systems predominantly rely on pre-segmented speech from external segmentation models, resulting in limited contextual modeling and suboptimal policy decision-making. This work proposes a unified Large Speech-Language Model (LSLM) that jointly performs speech reception, dynamic segmentation, low-latency policy decisions, and translation generation in an end-to-end manner—eliminating the need for external speech segmentation modules. Our core innovation is the introduction of Speech Chain-of-Thought (Speech-CoT) and a streaming Chain-of-Thought training paradigm, enabling multi-stage implicit reasoning to jointly optimize segmentation, policy learning, and translation within a single architecture. This design substantially enhances contextual modeling capacity and policy generalization, while enabling efficient training with minimal policy supervision. On standard StreamST benchmarks, our approach achieves state-of-the-art translation quality at significantly lower latency.

📝 Abstract
Streaming speech translation (StreamST) requires determining the appropriate timing, known as the policy, for generating translations while continuously receiving source speech input, balancing low latency against high translation quality. However, existing StreamST methods typically operate on sentence-level speech segments, a setting referred to as simultaneous speech translation (SimulST). In practice, they must be paired with external segmentation models to perform StreamST, and the truncated speech segments force SimulST models to make policy decisions and generate translations from limited contextual information. Moreover, SimulST models struggle to learn effective policies due to the complexity of speech inputs and cross-lingual generation. To address these challenges, we propose StreamUni, which performs StreamST with a unified Large Speech-Language Model (LSLM). Specifically, StreamUni incorporates a speech Chain-of-Thought (CoT) to guide the LSLM in generating multi-stage outputs. Leveraging these multi-stage outputs, StreamUni simultaneously accomplishes speech segmentation, policy decision, and translation generation, completing StreamST without massive policy-specific training. Additionally, we propose a streaming CoT training method that enhances low-latency policy decisions and generation capabilities using limited CoT data. Experiments demonstrate that our approach achieves state-of-the-art performance on StreamST tasks.
Problem

Research questions and friction points this paper is trying to address.

Balancing low latency and high quality in streaming speech translation
Overcoming limited context from sentence-level speech segments
Learning effective policies for complex speech and cross-lingual generation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Unified Large Speech-Language Model for StreamST
Speech Chain-of-Thought guides multi-stage outputs
Streaming CoT training enhances low-latency decisions
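The ideas above can be illustrated as a minimal streaming read/write loop. This is a hypothetical sketch, not the paper's implementation: the `StageOutput` fields, the `fake_lslm` stub, and the fixed three-frame emission policy are all invented stand-ins for the unified LSLM's multi-stage (segmentation, policy, translation) output.

```python
# Minimal sketch of a StreamST read/write loop. Hypothetical illustration only:
# the real StreamUni model derives segmentation, policy, and translation jointly
# via speech Chain-of-Thought; here a stub stands in for the LSLM.

from dataclasses import dataclass


@dataclass
class StageOutput:
    boundary: bool    # did the model detect a segment boundary?
    translation: str  # translation chunk emitted this step ("" means WAIT)


def fake_lslm(buffer):
    """Stub LSLM: emits a chunk after every 3 frames (an invented policy)."""
    if len(buffer) == 3:
        return StageOutput(boundary=True, translation="-".join(buffer))
    return StageOutput(boundary=False, translation="")


def stream_translate(frames, model=fake_lslm):
    """READ speech frames one by one; WRITE whenever the policy says so."""
    buffer, hypothesis = [], []
    for frame in frames:
        buffer.append(frame)       # READ: receive one more speech frame
        out = model(buffer)        # multi-stage inference on audio so far
        if out.translation:        # WRITE: the policy chose to emit
            hypothesis.append(out.translation)
        if out.boundary:           # segmentation: close the current segment
            buffer = []
    return hypothesis
```

The point of the sketch is that one model call per incoming frame yields all three decisions at once (segment boundary, wait-or-write policy, and the translation chunk itself), which is what removes the need for an external segmentation module.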
Shoutao Guo
Key Laboratory of Intelligent Information Processing, Institute of Computing Technology, Chinese Academy of Sciences (ICT/CAS); University of Chinese Academy of Sciences, Beijing, China
Xiang Li
Li Auto
Shaolei Zhang
Institute of Computing Technology, Chinese Academy of Sciences (ICT/CAS)
Natural Language Processing, Large Language Model, Multimodal LLMs, Simultaneous Translation
Mengge Liu
Beijing Institute of Technology
Machine Translation
Wei Chen
Li Auto
Yang Feng
Key Laboratory of Intelligent Information Processing, Institute of Computing Technology, Chinese Academy of Sciences (ICT/CAS); Key Laboratory of AI Safety, Chinese Academy of Sciences; University of Chinese Academy of Sciences, Beijing, China