DSFlow: Dual Supervision and Step-Aware Architecture for One-Step Flow Matching Speech Synthesis

📅 2026-02-03
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the high computational cost of multi-step iterative inference in flow-matching text-to-speech (TTS) and the limitations of existing distillation approaches, which suffer from endpoint error accumulation in few-step or single-step generation and inefficient parameter usage due to continuous-time modeling. To overcome these issues, the authors propose DSFlow, a novel framework that reformulates the generative process as a discrete prediction task. DSFlow introduces a dual-supervision mechanism—combining endpoint matching with deterministic mean velocity alignment—and replaces conventional continuous-time conditioning with a lightweight step-aware embedding. This design significantly enhances training stability and parameter efficiency, enabling high-quality single-step and few-step speech synthesis across various flow-matching TTS architectures while reducing model size and inference cost.
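To make the dual-supervision idea concrete, here is a minimal toy sketch (not the authors' code; all names and the linear teacher field are illustrative assumptions). A student predicts a single mean velocity for a whole interval, and is supervised both by matching the teacher's endpoint and by aligning with the deterministic mean velocity along the teacher's trajectory:

```python
import numpy as np

def teacher_velocity(x, t):
    # Toy stand-in for a pretrained flow-matching teacher:
    # a linear field pulling x toward an all-ones target.
    target = np.ones_like(x)
    return target - x

def dual_supervision_targets(x0, t0, t1, n_sub=8):
    """Simulate the teacher over [t0, t1] with Euler substeps and return
    (teacher endpoint, deterministic mean velocity over the interval)."""
    x, t = x0.copy(), t0
    h = (t1 - t0) / n_sub
    v_sum = np.zeros_like(x0)
    for _ in range(n_sub):
        v = teacher_velocity(x, t)
        v_sum += v
        x = x + h * v
        t += h
    return x, v_sum / n_sub

def dual_supervision_loss(v_pred, x0, t0, t1, x_end_t, v_mean_t):
    dt = t1 - t0
    # (1) endpoint matching: the student's one-step endpoint vs. the teacher's
    endpoint = np.mean((x0 + dt * v_pred - x_end_t) ** 2)
    # (2) mean-velocity alignment: the student's velocity vs. the teacher's
    # average velocity over the same interval
    velocity = np.mean((v_pred - v_mean_t) ** 2)
    return endpoint + velocity
```

In this Euler toy the two targets are consistent by construction (the mean velocity reproduces the endpoint exactly), so the sketch only illustrates the structure of the objective; in DSFlow's actual setting the two terms are described as providing complementary supervision that stabilizes few-step training.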

📝 Abstract
Flow-matching models have enabled high-quality text-to-speech synthesis, but their iterative sampling process during inference incurs substantial computational cost. Although distillation is widely used to reduce the number of inference steps, existing methods often suffer from process variance due to endpoint error accumulation. Moreover, directly reusing continuous-time architectures for discrete, fixed-step generation introduces structural parameter inefficiencies. To address these challenges, we introduce DSFlow, a modular distillation framework for few-step and one-step synthesis. DSFlow reformulates generation as a discrete prediction task and explicitly adapts the student model to the target inference regime. It improves training stability through a dual supervision strategy that combines endpoint matching with deterministic mean-velocity alignment, enforcing consistent generation trajectories across inference steps. In addition, DSFlow improves parameter efficiency by replacing continuous-time timestep conditioning with lightweight step-aware tokens, aligning model capacity with the significantly reduced timestep space of the discrete task. Extensive experiments across diverse flow-based text-to-speech architectures demonstrate that DSFlow consistently outperforms standard distillation approaches, achieving strong few-step and one-step synthesis quality while reducing model parameters and inference cost.
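The parameter-efficiency claim can be sketched as follows (hypothetical code, not from the paper): a conventional flow-matching model conditions on a continuous timestep via a sinusoidal embedding, while a fixed-step discrete sampler only ever sees a handful of step indices, so a small learned lookup table suffices:

```python
import numpy as np

rng = np.random.default_rng(0)

def sinusoidal_embedding(t, dim=256):
    # Conventional continuous-time conditioning, shown for comparison:
    # must cover any t in [0, 1].
    half = dim // 2
    freqs = np.exp(-np.log(10000.0) * np.arange(half) / (half - 1))
    ang = t * freqs
    return np.concatenate([np.sin(ang), np.cos(ang)])

class StepAwareEmbedding:
    """One learned token per discrete inference step (illustrative sketch).
    With n_steps fixed and small, the timestep space collapses from a
    continuum to n_steps indices."""
    def __init__(self, n_steps=4, dim=256):
        self.table = rng.normal(0.0, 0.02, size=(n_steps, dim))

    def __call__(self, step_index):
        return self.table[step_index]

emb = StepAwareEmbedding(n_steps=4, dim=256)
token = emb(0)  # the token conditioning the first (or only) inference step
```

The table here holds only `n_steps * dim` parameters and no projection network, which is the kind of capacity reduction the abstract attributes to replacing continuous-time conditioning with step-aware tokens.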
Problem

Research questions and friction points this paper is trying to address.

flow matching
text-to-speech synthesis
distillation
inference efficiency
step reduction
Innovation

Methods, ideas, or system contributions that make the work stand out.

flow matching
distillation
step-aware architecture
dual supervision
one-step synthesis
👥 Authors
Bin Lin, Peng Yang, Chao Yan, Xiaochen Liu, Wei Wang, Boyong Wu, Pengfei Tan, Xuerui Yang