Conditional Fairness for Generative AIs

📅 2024-04-25
📈 Citations: 1
Influential: 0
🤖 AI Summary
Generative AI exhibits context-conditioned fairness gaps (e.g., disparate representation quality for “low-income individuals” versus “business leaders”), posing critical challenges for equitable deployment across sociocultural contexts. Method: We propose the first two-level conditional fairness framework tailored to generative models. First, we formalize context-aware conditional fairness criteria at two levels: fairness of generated outputs and intrinsic fairness under neutral prompts. Second, we introduce a worst-case distance metric that flags a system as unfair when the appearance of a group deviates beyond a preset threshold. Third, we design combinatorial testing to assess intersectional fairness and develop a proxy-driven, lightweight prompt-injection enforcement strategy. Results: Experiments on multiple state-of-the-art text-to-image models show that the approach suppresses group-specific representational bias with minimal intervention, substantially strengthening contextual fairness assurance. The framework offers a verifiable, deployable paradigm for fairness research in generative AI, bridging theoretical rigor with practical applicability.
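The summary does not spell out how the combinatorial tests for intersectional fairness are generated. A minimal greedy pairwise-coverage (t=2) sketch in Python, with hypothetical attribute names, might look like this:

```python
from itertools import combinations, product

def pairwise_tests(attributes):
    """Greedily build a small set of full attribute assignments that
    covers every pair of values across attribute dimensions
    (a t=2 covering array for intersectional test prompts)."""
    keys = list(attributes)
    # Every value pair that must appear in at least one test.
    uncovered = {
        ((a, va), (b, vb))
        for a, b in combinations(keys, 2)
        for va in attributes[a]
        for vb in attributes[b]
    }
    tests = []
    while uncovered:
        # Pick the assignment covering the most still-uncovered pairs.
        best, best_cov = None, set()
        for values in product(*(attributes[k] for k in keys)):
            assign = dict(zip(keys, values))
            cov = {p for p in uncovered
                   if all(assign[a] == v for a, v in p)}
            if len(cov) > len(best_cov):
                best, best_cov = assign, cov
        tests.append(best)
        uncovered -= best_cov
    return tests
```

The point of pairwise coverage is that the test set grows roughly with the product of the two largest attribute domains rather than with the full cartesian product, which is what keeps intersectional testing tractable as attributes are added.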

📝 Abstract
The deployment of generative AI (GenAI) models raises significant fairness concerns, addressed in this paper through novel characterization and enforcement techniques specific to GenAI. Unlike standard AI performing specific tasks, GenAI's broad functionality requires "conditional fairness" tailored to the context being generated, such as demographic fairness in generating images of poor people versus successful business leaders. We define two fairness levels: the first evaluates fairness in generated outputs, independent of prompts and models; the second assesses inherent fairness with neutral prompts. Given the complexity of GenAI and challenges in fairness specifications, we focus on bounding the worst case, considering a GenAI system unfair if the distance between appearances of a specific group exceeds preset thresholds. We also explore combinatorial testing for assessing relative completeness in intersectional fairness. By bounding the worst case, we develop a prompt injection scheme within an agent-based framework to enforce conditional fairness with minimal intervention, validated on state-of-the-art GenAI systems.
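The worst-case criterion in the abstract (unfair if the distance between appearances of a specific group exceeds a preset threshold) can be illustrated with a frequency-based sketch. The paper's actual distance metric may differ, and the group labels here are assumed to come from some classifier over the generated images:

```python
from collections import Counter

def worst_case_unfair(group_labels, reference, threshold=0.1):
    """Flag a generator as unfair if any group's observed share among
    generated outputs deviates from its reference share by more than
    `threshold` (a worst-case, L-infinity-style bound)."""
    n = len(group_labels)
    counts = Counter(group_labels)
    worst = max(abs(counts.get(g, 0) / n - reference.get(g, 0.0))
                for g in set(reference) | set(counts))
    return worst, worst > threshold
```

Bounding the worst-case deviation, rather than an average, means a single badly underrepresented group is enough to fail the check, matching the abstract's "unfair if the distance ... exceeds preset thresholds" framing.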
Problem

Research questions and friction points this paper is trying to address.

Enforcing conditional fairness in generative AI outputs
Defining and measuring two levels of GenAI fairness
Developing intervention techniques that keep unfairness within preset thresholds
Innovation

Methods, ideas, or system contributions that make the work stand out.

Defines conditional fairness for GenAI outputs
Uses worst-case bounding for fairness enforcement
Implements prompt injection for minimal intervention
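The paper's enforcement scheme runs prompt injection inside an agent-based framework; as a much-simplified sketch (the attribute list, the crude substring check, and the sampling scheme are all illustrative assumptions, not the paper's method):

```python
import random

def inject_fairness(prompt, attribute_values, weights=None, rng=None):
    """If the prompt does not already mention any value of a sensitive
    attribute, append one sampled from a target distribution, steering a
    batch of generations toward that distribution with a minimal edit."""
    rng = rng or random.Random()
    # Crude substring check: if the user already constrained the
    # attribute (e.g. "female CEO"), leave the prompt untouched.
    if any(v in prompt.lower() for v in attribute_values):
        return prompt
    choice = rng.choices(attribute_values, weights=weights)[0]
    return f"{prompt}, {choice}"
```

Because the injection only fires on prompts that are neutral with respect to the attribute, explicit user intent is preserved, which is one plausible reading of the "minimal intervention" claim.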