🤖 AI Summary
This work addresses the efficiency bottleneck in diffusion model inference caused by high energy consumption and latency. Conventional dynamic voltage and frequency scaling (DVFS) approaches fail to exploit the inherent error resilience of diffusion models, often compromising either energy efficiency or output quality. To overcome this limitation, the authors propose DRIFT, a framework that systematically characterizes the fault tolerance of diffusion models and introduces a fine-grained, sensitivity-aware DVFS strategy with selective protection of critical modules and timesteps. DRIFT further incorporates a lightweight rollback-based adaptive error correction mechanism and memory optimizations. Evaluated across diverse models and datasets, DRIFT reduces energy consumption by 36% on average or achieves a 1.7× speedup while maintaining generation quality with no significant degradation.
📝 Abstract
Diffusion model deployment suffers from high energy consumption and inference latency despite its superior performance in visual generation tasks. Dynamic voltage and frequency scaling (DVFS) offers a promising way to exploit the potential of the underlying accelerators. However, existing approaches often yield either limited efficiency gains or degraded output quality because they overlook the inherent fault tolerance of diffusion models. Therefore, in this paper, we propose DRIFT, a novel algorithm-architecture co-optimization framework that harnesses this fault tolerance for efficient and reliable diffusion model inference. We first perform a comprehensive resilience analysis on representative diffusion models. Building on these observations, we introduce a fine-grained, resilience-aware DVFS strategy that selectively protects error-sensitive network blocks and timesteps, and a rollback-based adaptive algorithm-based fault tolerance (ABFT) mechanism that corrects only critical errors by reverting to previous timesteps. We further optimize offloading intervals and reorganize data layouts to reduce memory overhead. Experiments across diverse models and datasets show that DRIFT achieves on average 36% energy savings through voltage underscaling or a 1.7× speedup via overclocking while maintaining generation quality.
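The control loop the abstract describes can be sketched at a high level: run each denoising timestep at an aggressive voltage, keep error-sensitive timesteps at a safe setting, and when an ABFT-style check flags a critical error, roll back and redo that step safely. The sketch below is illustrative only; the step function, the critical-timestep set, and the error threshold are all hypothetical placeholders, not the paper's actual implementation.

```python
import random

CRITICAL_STEPS = {0, 1, 2}   # assumed: early timesteps are error-sensitive
ERROR_THRESHOLD = 0.5        # assumed tolerance for the ABFT-style check

def run_step(state, timestep, safe_voltage):
    """Hypothetical denoising step; low voltage may inject a detectable error."""
    error = 0.0 if safe_voltage else random.uniform(0.0, 1.0)
    return state + 1.0, error  # stand-in for one denoising update

def inference(num_steps=10):
    state, rollbacks = 0.0, 0
    for t in range(num_steps):
        safe = t in CRITICAL_STEPS          # selective protection of critical timesteps
        checkpoint = state                  # keep previous-timestep state for rollback
        state, err = run_step(state, t, safe)
        if err > ERROR_THRESHOLD:           # ABFT-style check flags a critical error
            state = checkpoint              # rollback: revert to the previous timestep
            state, _ = run_step(state, t, True)  # redo the step at a safe voltage
            rollbacks += 1
    return state, rollbacks
```

Non-critical errors below the threshold are simply absorbed, which is where the energy savings come from: only the errors that would actually harm output quality pay the rollback cost.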