🤖 AI Summary
This study exposes the “Malicious Technical Ecosystem” (MTE) underpinning AI-generated non-consensual intimate imagery (AIG-NCII), comprising open-source deepfake models and nearly 200 “nudification” tools, and reveals systemic failures in current AI governance around accountability attribution, intent inference, and response timeliness. Methodologically, it introduces an analytical framework for MTEs that integrates digital anthropology, software archaeology, policy text analysis, and a mapping to the NIST AI 100-4 report on synthetic content risks, adopting a survivor-centered critique of how prevailing standards mischaracterize developer liability, user intent, and tool accessibility. Empirically, it identifies three structural deficiencies: (1) unjustified displacement of responsibility onto end users; (2) the inherent unverifiability of malicious intent; and (3) severe regulatory lag behind the rapid proliferation of open-source tools. The findings provide both theoretical grounding and actionable pathways for reorienting AI governance around rights protection rather than technical compliance.
📝 Abstract
In this paper, we adopt a survivor-centered approach to locate and dissect the role of sociotechnical AI governance in preventing AI-Generated Non-Consensual Intimate Images (AIG-NCII) of adults, colloquially known as “deepfake pornography.” We identify a “malicious technical ecosystem,” or “MTE,” comprising open-source face-swapping models and nearly 200 “nudifying” software programs that allow non-technical users to create AIG-NCII within minutes. Then, using the National Institute of Standards and Technology (NIST) AI 100-4 report as a reflection of current synthetic content governance methods, we show how the current landscape of practices fails to effectively regulate the MTE for adult AIG-NCII, and we identify the flawed assumptions that explain these gaps.