🤖 AI Summary
Existing identity-conditioned diffusion models suffer from insufficient inter-class separability in face image generation, leading to identity overlap and degraded downstream face recognition performance. To address this, the authors propose NegFaceDiff, a novel sampling method that incorporates *negative identity conditions* into the identity-conditioned diffusion process, explicitly guiding generation away from unwanted features while preserving intra-class consistency. Identity separability, measured by the Fisher Discriminant Ratio (FDR), improves from 2.427 to 5.687 on the generated data. Consequently, face recognition models trained on NegFaceDiff-synthesized data outperform those trained on data generated without negative conditions across standard benchmarks, including LFW, CFP-FP, and AgeDB-30. By integrating negative conditioning into identity-conditioned diffusion sampling, the work offers a practical route to more discriminative synthetic face datasets.
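The Fisher Discriminant Ratio quoted above (2.427 → 5.687) measures how well identity classes separate in embedding space. The paper does not give its exact formulation, but a common multi-class variant is the ratio of between-class to within-class scatter; a minimal sketch under that assumption:

```python
import numpy as np

def fisher_discriminant_ratio(embeddings, labels):
    """One common FDR formulation: trace of between-class scatter
    divided by trace of within-class scatter. Higher values mean
    identity classes are more separable.

    embeddings: (N, D) array of identity features
    labels:     (N,) array of identity ids
    NOTE: this is an illustrative assumption, not the paper's exact metric.
    """
    embeddings = np.asarray(embeddings, dtype=float)
    labels = np.asarray(labels)
    global_mean = embeddings.mean(axis=0)
    between, within = 0.0, 0.0
    for c in np.unique(labels):
        cls = embeddings[labels == c]
        mu = cls.mean(axis=0)
        # between-class scatter: class means spread around the global mean
        between += len(cls) * np.sum((mu - global_mean) ** 2)
        # within-class scatter: samples spread around their class mean
        within += np.sum((cls - mu) ** 2)
    return between / within
```

On synthetic data with two well-separated clusters this ratio is large; as clusters overlap it shrinks toward zero, matching the intuition that a higher FDR means less identity ambiguity.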
📝 Abstract
The use of synthetic data as an alternative to authentic datasets in face recognition (FR) development has gained significant attention, addressing privacy, ethical, and practical concerns associated with collecting and using authentic data. Recent state-of-the-art approaches have proposed identity-conditioned diffusion models to generate identity-consistent face images, facilitating their use in training FR models. However, these methods often lack explicit sampling mechanisms to enforce inter-class separability, leading to identity overlap in the generated data and, consequently, suboptimal FR performance. In this work, we introduce NegFaceDiff, a novel sampling method that incorporates negative conditions into the identity-conditioned diffusion process. NegFaceDiff enhances identity separation by leveraging negative conditions that explicitly guide the model away from unwanted features while preserving intra-class consistency. Extensive experiments demonstrate that NegFaceDiff significantly improves the identity consistency and separability of data generated by identity-conditioned diffusion models. Specifically, identity separability, measured by the Fisher Discriminant Ratio (FDR), increases from 2.427 to 5.687. These improvements are reflected in FR systems trained on the NegFaceDiff dataset, which outperform models trained on data generated without negative conditions across multiple benchmarks.