🤖 AI Summary
Blind face restoration suffers from an imbalance between fidelity preservation and detail generation due to unknown degradations. Existing diffusion-based methods, constrained by fixed sampling steps and global guidance scales, struggle to adapt to spatially non-uniform degradations—often resulting in under-diffusion or over-diffusion. To address this, we propose a dynamic blur-aware diffusion framework: (1) a Gaussian blur magnitude map is constructed to quantify spatially varying degradation intensity; (2) a dynamic start-time selection mechanism and local adaptive guidance strength adjustment are introduced to replace rigid temporal scheduling and uniform guidance; and (3) closed-form guidance combined with a dynamic scaling adapter enables fine-grained process control on pre-trained diffusion models. Extensive experiments demonstrate state-of-the-art performance across multiple benchmarks in both quantitative metrics (e.g., PSNR, LPIPS) and qualitative assessment, with significant improvements in reconstruction quality, identity preservation, and robustness to diverse degradations.
📝 Abstract
Blind Face Restoration aims to recover high-fidelity, detail-rich facial images from unknown degraded inputs, presenting significant challenges in preserving both identity and detail. Pre-trained diffusion models have been increasingly used as image priors to generate fine details; however, existing methods often rely on fixed diffusion sampling timesteps and a global guidance scale, implicitly assuming uniform degradation. This limitation, together with potentially imperfect degradation kernel estimation, frequently leads to under- or over-diffusion, resulting in an imbalance between fidelity and quality. We propose DynFaceRestore, a novel blind face restoration approach that learns to map any blindly degraded input to Gaussian blurry images. By leveraging these blurry images and their respective Gaussian kernels, we dynamically select the starting timestep for each blurry image and apply closed-form guidance during the diffusion sampling process to maintain fidelity. Additionally, we introduce a dynamic guidance scaling adjuster that modulates the guidance strength across local regions, enhancing detail generation in complex areas while preserving structural fidelity along contours. This strategy effectively balances the trade-off between fidelity and quality. DynFaceRestore achieves state-of-the-art performance in both quantitative and qualitative evaluations, demonstrating robustness and effectiveness in blind face restoration.
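The two dynamic mechanisms described above can be illustrated with a minimal sketch. Note this is not the paper's implementation: the mapping from blur magnitude to start timestep, the constants `sigma_max`, `T`, `s_min`, `s_max`, and the function names are all illustrative assumptions; the intent is only to show the idea of "heavier blur → noisier (later) start timestep" and "detail-rich region → weaker guidance, smooth region → stronger guidance".

```python
import numpy as np

def select_start_timestep(blur_sigma, sigma_max=3.0, T=1000):
    """Illustrative mapping from estimated Gaussian blur magnitude to a
    diffusion start timestep: stronger blur starts sampling from a noisier
    (later) timestep so the prior can synthesize more detail.
    sigma_max and T are assumed constants, not values from the paper."""
    frac = min(blur_sigma / sigma_max, 1.0)
    return int(frac * T)

def guidance_scale_map(detail_map, s_min=0.5, s_max=2.0):
    """Illustrative per-pixel guidance strength: weaker guidance in
    detail-rich regions (letting the diffusion prior generate texture),
    stronger guidance in smooth/contour regions (preserving fidelity).
    detail_map is any per-pixel measure of local complexity."""
    d = (detail_map - detail_map.min()) / (np.ptp(detail_map) + 1e-8)
    return s_max - (s_max - s_min) * d
```

A heavily blurred face (`blur_sigma` near `sigma_max`) would thus start near the pure-noise end of the trajectory, while a mildly degraded one starts late and keeps most of its structure; the scale map plays the same role spatially, replacing a single global guidance constant.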