🤖 AI Summary
This work addresses the high computational cost of multi-step iterative inference in flow-matching text-to-speech (TTS) and the limitations of existing distillation approaches, which suffer from endpoint-error accumulation in few-step or single-step generation and from inefficient parameter usage caused by continuous-time conditioning. To overcome these issues, the authors propose DSFlow, a framework that reformulates the generative process as a discrete prediction task. DSFlow introduces a dual-supervision mechanism, combining endpoint matching with deterministic mean-velocity alignment, and replaces conventional continuous-time conditioning with a lightweight step-aware embedding. This design improves training stability and parameter efficiency, enabling high-quality single-step and few-step speech synthesis across various flow-matching TTS architectures while reducing model size and inference cost.
📝 Abstract
Flow-matching models have enabled high-quality text-to-speech synthesis, but their iterative sampling process incurs substantial inference cost. Although distillation is widely used to reduce the number of inference steps, existing methods often suffer from process variance due to endpoint-error accumulation. Moreover, directly reusing continuous-time architectures for discrete, fixed-step generation introduces structural parameter inefficiencies. To address these challenges, we introduce DSFlow, a modular distillation framework for few-step and one-step synthesis. DSFlow reformulates generation as a discrete prediction task and explicitly adapts the student model to the target inference regime. It improves training stability through a dual-supervision strategy that combines endpoint matching with deterministic mean-velocity alignment, enforcing consistent generation trajectories across inference steps. In addition, DSFlow improves parameter efficiency by replacing continuous-time timestep conditioning with lightweight step-aware tokens, aligning model capacity with the significantly reduced timestep space of the discrete task. Extensive experiments across diverse flow-based text-to-speech architectures demonstrate that DSFlow consistently outperforms standard distillation approaches, achieving strong few-step and one-step synthesis quality while reducing model parameters and inference cost.
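To make the dual-supervision idea concrete, here is a minimal toy sketch of the two loss terms the abstract describes: an endpoint-matching term and a deterministic mean-velocity alignment term, with lightweight step-aware embeddings replacing continuous-time conditioning. All specifics below (the linear stand-in for the student network, the loss weight `lam`, the uniform step grid) are illustrative assumptions, not the paper's actual implementation; in the real method the targets would come from a pretrained teacher's trajectory over mel-spectrogram features.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions: the real student is a neural TTS model over mel features;
# a single linear map stands in for it here (assumption for illustration).
D, K = 8, 4                                # feature dim, number of discrete steps
W = rng.normal(size=(D, D)) * 0.1          # shared student weights
step_emb = rng.normal(size=(K, D)) * 0.1   # step-aware embeddings (one per discrete step)

def student_velocity(x, k):
    """Predict a mean velocity for discrete step k.
    Conditioning is a per-step learned embedding rather than a
    continuous-time timestep encoding."""
    return (x + step_emb[k]) @ W

def dsflow_loss(x0, x1, k, lam=1.0):
    """Assumed form of the dual supervision: endpoint matching plus
    deterministic mean-velocity alignment over the interval [t_k, t_{k+1}]."""
    t0, t1 = k / K, (k + 1) / K
    v_pred = student_velocity(x0, k)
    x1_pred = x0 + (t1 - t0) * v_pred        # one-step endpoint estimate
    v_target = (x1 - x0) / (t1 - t0)         # deterministic mean velocity of the segment
    loss_endpoint = np.mean((x1_pred - x1) ** 2)
    loss_velocity = np.mean((v_pred - v_target) ** 2)
    return loss_endpoint + lam * loss_velocity

x0 = rng.normal(size=D)   # trajectory start (noise sample)
x1 = rng.normal(size=D)   # teacher-provided segment endpoint
loss = dsflow_loss(x0, x1, k=0)
print(float(loss))
```

In this linear toy the two terms are proportional (the endpoint error is the velocity error scaled by the step size), so the example only illustrates the supervision structure; in practice the endpoint and mean-velocity targets would be derived from distinct teacher signals, which is what makes the combination informative.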