🤖 AI Summary
This study investigates how generative AI models produce biased outputs under conditions of extreme uncertainty—specifically, when critical contextual information is absent. Method: We introduce a “contextual void” experimental paradigm, feeding only textual resumes to GPT-4 and DALL·E in tandem to generate profile portraits, thereby exposing how the models rely on stereotypical associations and implicit assumptions when translating text into images without explicit physical descriptions. Our approach employs cross-domain analogical reasoning to construct controlled, replicable voids, elevating speculative design into a critical research methodology. Using qualitative content analysis grounded in critical technical practice, we systematically identify systemic stereotyping and severe hallucinations across gender, race, and occupational dimensions. Contribution/Results: We demonstrate that contextual voids are not neutral inputs but active triggers of latent, value-laden biases embedded in these models—providing a reproducible, scalable methodological framework for bias detection and critical AI auditing.
📝 Abstract
In this paper, we introduce a speculative design methodology for studying the behavior of generative AI systems, framing design as a mode of inquiry. We propose bridging seemingly unrelated domains to generate intentional context voids, using these tasks as probes to elicit AI model behavior. We demonstrate this through a case study: probing the ChatGPT system (GPT-4 and DALL·E) to generate headshots from professional Curricula Vitae (CVs). In contrast to traditional evaluation approaches, ours assesses system behavior under conditions of radical uncertainty -- when the system is forced to invent entire swaths of missing context -- revealing subtle stereotypes and value-laden assumptions. We qualitatively analyze how the system interprets identity and competence markers from CVs, translating them into visual portraits despite the missing context (i.e., physical descriptors). We show that within this context void, the AI system generates biased representations, potentially relying on stereotypical associations or blatant hallucinations.