🤖 AI Summary
Existing caching strategies focus on minimizing local heuristic errors while neglecting global error accumulation, leading to severe content degradation in accelerated video generation. This paper proposes LeMiCa—a training-free acceleration framework for diffusion-based video generation—that formulates cache scheduling as a path optimization problem on an error-weighted directed graph. It introduces a lexicographic min–max strategy to rigorously bound worst-case cumulative error, ensuring global content consistency. LeMiCa jointly optimizes inter-frame cache reuse and error-aware scheduling, with quantitative evaluation using perceptual metrics such as LPIPS. On Latte, it achieves 2.9× speedup; on Open-Sora, it attains an LPIPS of 0.05—substantially outperforming baselines—while preserving visual quality with negligible perceptible degradation.
📝 Abstract
We present LeMiCa, a training-free and efficient acceleration framework for diffusion-based video generation. While existing caching strategies primarily focus on reducing local heuristic errors, they often overlook the accumulation of global errors, leading to noticeable content degradation between accelerated and original videos. To address this issue, we formulate cache scheduling as a directed graph with error-weighted edges and introduce a Lexicographic Minimax Path Optimization strategy that explicitly bounds the worst-case path error. This approach substantially improves the consistency of global content and style across generated frames. Extensive experiments on multiple text-to-video benchmarks demonstrate that LeMiCa delivers improvements in both inference speed and generation quality. Notably, our method achieves a 2.9× speedup on the Latte model and reaches an LPIPS score of 0.05 on Open-Sora, outperforming prior caching techniques. Importantly, these gains come with minimal perceptual quality degradation, making LeMiCa a robust and generalizable paradigm for accelerating diffusion-based video generation. We believe this approach can serve as a strong foundation for future research on efficient and reliable video synthesis. Our code is available at: https://github.com/UnicomAI/LeMiCa
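To make the graph formulation concrete, the sketch below illustrates the general idea of lexicographic minimax path selection on a small error-weighted DAG: among all paths from the first to the last denoising step, pick the one whose descending-sorted edge-error vector is lexicographically smallest (first minimize the worst edge error, then the second-worst, and so on). This is an illustrative toy implementation with made-up error weights, not the authors' code; the exhaustive search is only practical for small graphs.

```python
def lexicographic_minimax_path(edges, source, target):
    """Among all source->target paths, return the one whose
    descending-sorted edge-error vector is lexicographically smallest.
    `edges` maps node -> list of (next_node, error) pairs (DAG assumed)."""
    best_key, best_path = None, None

    def dfs(node, path, errs):
        nonlocal best_key, best_path
        if node == target:
            # Sort errors largest-first; tuple comparison then realizes
            # the lexicographic min-max objective.
            key = tuple(sorted(errs, reverse=True))
            if best_key is None or key < best_key:
                best_key, best_path = key, list(path)
            return
        for nxt, err in edges.get(node, []):
            dfs(nxt, path + [nxt], errs + [err])

    dfs(source, [source], [])
    return best_path, best_key


# Toy schedule: nodes are denoising steps; an edge (i -> j, e) means
# "reuse the cache from step i until step j" with estimated error e.
edges = {
    0: [(1, 0.1), (2, 0.5)],
    1: [(2, 0.2), (3, 0.4)],
    2: [(3, 0.1)],
}
path, key = lexicographic_minimax_path(edges, 0, 3)
print(path, key)  # → [0, 1, 2, 3] (0.2, 0.1, 0.1)
```

Here the schedule 0→1→2→3 wins because its worst single-step error (0.2) is smaller than that of the skip-heavy alternatives (0.4 and 0.5), matching the paper's goal of bounding worst-case cumulative error rather than just summing local errors.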