🤖 AI Summary
This study systematically examines the risks of malicious exploitation of generative AI (GenAI) and large language models (LLMs) for online electoral interference. It focuses on four prototypical attack vectors: deepfakes, automated botnets, targeted disinformation campaigns, and synthetic identity construction. Methodologically, the work introduces the first threat taxonomy for GenAI-enabled election interference, integrating LLM capability analysis, socio-technical systems assessment, real-world incident forensics, and cross-platform information diffusion modeling. The analysis identifies six distinct attack patterns and three systemic institutional vulnerabilities. As a key contribution, the paper proposes a tiered response framework that has informed AI election-security initiatives at multiple national regulatory bodies. The study thus contributes both to the theoretical understanding of democratic resilience under AI-driven threats and to actionable policy pathways for safeguarding electoral integrity.
📝 Abstract
Generative Artificial Intelligence (GenAI) and Large Language Models (LLMs) pose significant risks, particularly in the realm of online election interference. This paper explores the nefarious applications of GenAI, highlighting their potential to disrupt democratic processes through deepfakes, botnets, targeted disinformation campaigns, and synthetic identities. By examining recent case studies and public incidents, we illustrate how malicious actors exploit these technologies to influence voter behavior, spread disinformation, and undermine public trust in electoral systems. The paper also discusses the societal implications of these threats, emphasizing the urgent need for robust mitigation strategies and international cooperation to safeguard democratic integrity.