🤖 AI Summary
Diffusion models often produce structural distortions and unrealistic details in localized regions, particularly hands and faces, because these regions receive too little supervision during training for human image generation. To address this, we propose a multi-objective fair fine-tuning framework. The approach introduces (1) positional priors derived from pre-annotated landmarks, which define explicit local learning objectives for hands and faces, and (2) a fair optimization strategy guided by the Minimum Potential Delay (MPD) criterion, which balances parameter updates between the global semantic loss and the local detail losses. Crucially, the method requires no additional annotations or architectural modifications. Experiments demonstrate significant improvements in the structural plausibility and textural fidelity of hands and faces while preserving overall image quality and generation stability, and the framework exhibits strong robustness and generalization across diverse poses and complex scenes.
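As a concrete illustration of the fair update, the sketch below combines per-objective gradients under minimum-potential-delay fairness, which corresponds to the alpha = 2 case of alpha-fair resource allocation. The fixed-point condition (G Gᵀ)w = w^(-1/2), the damped solver, and the function name `mpd_weights` are assumptions made for illustration; the paper's exact derivation and solver may differ.

```python
import torch

def mpd_weights(grads, num_iters=200, step=0.1):
    """Combine per-objective gradients under minimum-potential-delay
    fairness (the alpha = 2 case of alpha-fair resource allocation).
    Seeks weights w > 0 satisfying (G G^T) w = w**(-1/2) elementwise,
    via a damped fixed-point iteration; more robust solvers exist.

    grads: list of flattened gradient tensors, one per objective.
    Returns the weights and the fair descent direction d = G^T w.
    """
    G = torch.stack(grads)                    # (k, d) gradient matrix
    gram = G @ G.T                            # (k, k) Gram matrix
    w = torch.ones(len(grads))
    for _ in range(num_iters):
        # At the fixed point, w_i = ((gram @ w)_i)^(-2), i.e. (Gw)_i = w_i^(-1/2)
        target = (gram @ w).clamp(min=1e-8).pow(-2.0)
        w = (1 - step) * w + step * target    # damped update for stability
    return w, w @ G

# Toy check: three noisy "objective" gradients over 10 parameters.
grads = [torch.randn(10) for _ in range(3)]
w, d = mpd_weights(grads)
```

In a training step, one would backpropagate the global, hand, and face losses separately to obtain `grads`, then apply `d` as the shared update direction for the fine-tuned parameters.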
📝 Abstract
Image generation has achieved remarkable progress with the development of large-scale text-to-image models, especially diffusion-based models. However, generating human images with plausible details, such as faces or hands, remains challenging due to insufficient supervision of local regions during training. To address this issue, we propose FairHuman, a multi-objective fine-tuning approach designed to enhance both global and local generation quality fairly. Specifically, we first construct three learning objectives: a global objective derived from the default diffusion objective and two local objectives for hands and faces based on pre-annotated positional priors. We then derive the optimal parameter-updating strategy under the guidance of the Minimum Potential Delay (MPD) criterion, thereby attaining fairness-aware optimization for this multi-objective problem. As a result, our method achieves significant improvements in generating challenging local details while maintaining overall quality. Extensive experiments showcase the effectiveness of our method in improving human image generation across different scenarios.
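For intuition, here is a minimal sketch of how the three objectives could be instantiated: the standard epsilon-prediction diffusion loss, evaluated globally and also restricted to binary masks rendered from the positional priors. The mask construction, variable names, and latent shapes are illustrative assumptions rather than the paper's exact formulation.

```python
import torch

def region_diffusion_loss(noise_pred, noise, mask=None):
    """Epsilon-prediction MSE, optionally restricted to a binary mask
    of shape (B, 1, H, W) marking a hand or face region in latent space."""
    err = (noise_pred - noise).pow(2)
    if mask is None:                          # global objective: whole latent
        return err.mean()
    denom = mask.expand_as(err).sum().clamp(min=1.0)
    return (err * mask).sum() / denom         # mean over masked elements only

# Toy shapes; in practice noise_pred comes from the denoising network and
# the masks would be rasterized from the pre-annotated landmark regions.
noise_pred = torch.randn(2, 4, 64, 64)
noise      = torch.randn(2, 4, 64, 64)
hand_mask  = torch.zeros(2, 1, 64, 64); hand_mask[..., 40:56, 10:26] = 1.0
face_mask  = torch.zeros(2, 1, 64, 64); face_mask[..., 4:20, 24:40] = 1.0

loss_global = region_diffusion_loss(noise_pred, noise)
loss_hand   = region_diffusion_loss(noise_pred, noise, hand_mask)
loss_face   = region_diffusion_loss(noise_pred, noise, face_mask)
```

These three losses are then the inputs to the fair parameter-updating step described in the summary above.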