🤖 AI Summary
This paper identifies human oversight itself as a novel attack surface in AI safety: while oversight serves as a critical safeguard against erroneous AI outputs, system malfunctions, and rights violations, the oversight mechanisms are themselves vulnerable to exploitation through system vulnerabilities, communication hijacking, or social engineering, which undermines AI security, assurance, and accountability. Adopting a cybersecurity lens, the authors classify three threat vectors, targeting (1) the AI system, (2) the human-AI interaction channel, and (3) the human supervisors. Integrating principles from cybersecurity and AI governance frameworks, they propose corresponding hardening strategies, treating oversight as security-critical infrastructure with an associated risk taxonomy and mitigation pathways. The work provides both a theoretical foundation and actionable guidelines for designing robust, attack-resilient human oversight of AI systems.
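To make the three-vector classification concrete, here is a minimal Python sketch of the taxonomy. The vector names follow the summary above, while the `Threat` structure, its field names, and the example mitigations are illustrative assumptions, not the paper's formalization.

```python
from enum import Enum
from dataclasses import dataclass

class ThreatVector(Enum):
    """The three attack-vector classes named in the summary above."""
    AI_SYSTEM = "ai_system"              # e.g., exploiting system vulnerabilities
    INTERACTION_CHANNEL = "interaction"  # e.g., hijacking human-AI communication
    HUMAN_SUPERVISOR = "supervisor"      # e.g., social engineering of oversight staff

@dataclass
class Threat:
    vector: ThreatVector
    description: str
    example_mitigations: tuple[str, ...]

# Illustrative entries only; the paper's taxonomy is richer than this.
THREATS = [
    Threat(ThreatVector.AI_SYSTEM,
           "Compromise of the AI system itself",
           ("vulnerability management", "input validation")),
    Threat(ThreatVector.INTERACTION_CHANNEL,
           "Tampering with messages between the AI and its overseers",
           ("authenticated channels", "message signing")),
    Threat(ThreatVector.HUMAN_SUPERVISOR,
           "Manipulation of oversight personnel",
           ("security awareness training", "two-person review")),
]
```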
📝 Abstract
Human oversight of AI is promoted as a safeguard against risks such as inaccurate outputs, system malfunctions, or violations of fundamental rights, and is mandated by regulations such as the European AI Act. Yet debates on human oversight have largely focused on its effectiveness while overlooking a critical dimension: the security of human oversight. We argue that human oversight creates a new attack surface within the safety, security, and accountability architecture of AI operations. Drawing on cybersecurity perspectives, we analyze attack vectors that threaten the requirements of effective human oversight and thereby undermine the safety of AI operations. Such attacks may target the AI system, its communication with oversight personnel, or the personnel themselves. We then outline hardening strategies to mitigate these risks. Our contributions are: (1) introducing a security perspective on human oversight, and (2) providing an overview of attack vectors and hardening strategies to enable secure human oversight of AI.
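As one concrete illustration of a hardening strategy for the second attack target (the communication channel between the AI system and oversight personnel), the sketch below authenticates oversight messages with an HMAC so that hijacked or forged alerts fail verification. This is our illustration under assumed names (`sign_alert`, `verify_alert`, an out-of-band shared key), not a mechanism specified in the paper.

```python
import hashlib
import hmac
import json

# Hypothetical shared key provisioned out of band between the AI system
# and the oversight console; key management is out of scope for this sketch.
SHARED_KEY = b"replace-with-provisioned-secret"

def sign_alert(alert: dict) -> dict:
    """AI-system side: attach an HMAC tag so the oversight console
    can detect tampered or injected messages on the channel."""
    payload = json.dumps(alert, sort_keys=True).encode()
    tag = hmac.new(SHARED_KEY, payload, hashlib.sha256).hexdigest()
    return {"payload": alert, "tag": tag}

def verify_alert(message: dict) -> bool:
    """Oversight-console side: reject messages whose tag does not
    verify (a possible sign of channel hijacking)."""
    payload = json.dumps(message["payload"], sort_keys=True).encode()
    expected = hmac.new(SHARED_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, message["tag"])

if __name__ == "__main__":
    msg = sign_alert({"severity": "high", "event": "anomalous output flagged"})
    assert verify_alert(msg)            # authentic alert passes
    msg["payload"]["severity"] = "low"  # attacker downgrades the alert in transit
    assert not verify_alert(msg)        # tampering is detected
```

Note that this covers only message integrity: a deployed channel would also need replay protection, confidentiality, and hardening of the endpoints and of the supervisors themselves, matching the paper's other two attack targets.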