🤖 AI Summary
This paper examines core ethical risks arising from generative AI-driven digital avatars of Holocaust survivors in memorial transmission and education, particularly risks concerning memory authenticity, informed consent, dignity preservation, and harm prevention. Methodologically, it combines ethical analysis, historical scholarship, interdisciplinary normative reasoning, and socio-technical governance design. The study introduces the "Minimally Viable Permissibility Principle" (MVPP), the first framework to systematically integrate five dimensions (authentic presence, effective consent, positive value orientation, transparency, and risk mitigation) for assessing the ethical acceptability of AI avatars in historical trauma contexts. It identifies multi-layered risks to survivor dignity, user cognition, and developer accountability, and proposes actionable pathways for technical governance and institutional coordination. The work establishes the first comprehensive ethical benchmark for AI applications involving sensitive historical memory.
📝 Abstract
Advances in generative artificial intelligence (AI) have driven a growing effort to create digital duplicates: semi-autonomous recreations of living and dead people that can serve many purposes, including tutoring, coping with grief, and attending business meetings. However, the normative implications of digital duplicates remain obscure, particularly when they are applied to genocide memory and education. To address this gap, we examine the normative possibilities and risks of using more advanced forms of generative AI-enhanced duplicates to transmit Holocaust survivor testimonies. We first review historical and contemporary uses of survivor testimonies. We then scrutinize the possible benefits of using digital duplicates in this context and apply the Minimally Viable Permissibility Principle (MVPP), an analytical framework for evaluating the risks of digital duplicates. The MVPP comprises five core components: the need for authentic presence, consent, positive value, transparency, and harm-risk mitigation. Using the MVPP, we identify potential harms that digital duplicates might pose to different actors, including survivors, users, and developers, and we propose technical and socio-technical mitigation strategies to address these harms.