Artificial Intelligence in Government: Why People Feel They Lose Control

📅 2025-05-02
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This study investigates the causes and mechanisms underlying the erosion of the public's sense of control over governmental AI applications. Drawing on principal-agent theory (PAT), it identifies three structural tensions (assessability, dependency, and contestability) that expose risks to democratic legitimacy as AI evolves from a tool into an autonomous agent: short-term efficiency gains may obscure long-term trust degradation, producing a "failure-by-success" dynamic. Method: The study pioneers the systematic application of PAT to government AI governance, employing a pre-registered factorial survey experiment with cross-domain scenarios (taxation, welfare, law enforcement) and structural equation modeling. Contribution/Results: Findings reveal that while initial AI performance improvements enhance institutional trust, they concurrently diminish citizens' perceived control. When structural tensions intensify, trust and perceived control decline simultaneously, demonstrating that principal-agent risks exert a dominant influence on public attitudes. This work advances theoretical and empirical understanding of AI's democratic implications in public administration.

📝 Abstract
The use of Artificial Intelligence (AI) in public administration is expanding rapidly, moving from automating routine tasks to deploying generative and agentic systems that autonomously act on goals. While AI promises greater efficiency and responsiveness, its integration into government functions raises concerns about fairness, transparency, and accountability. This article applies principal-agent theory (PAT) to conceptualize AI adoption as a special case of delegation, highlighting three core tensions: assessability (can decisions be understood?), dependency (can the delegation be reversed?), and contestability (can decisions be challenged?). These structural challenges may lead to a "failure-by-success" dynamic, where early functional gains obscure long-term risks to democratic legitimacy. To test this framework, we conducted a pre-registered factorial survey experiment across tax, welfare, and law enforcement domains. Our findings show that although efficiency gains initially bolster trust, they simultaneously reduce citizens' perceived control. When the structural risks come to the foreground, institutional trust and perceived control both drop sharply, suggesting that hidden costs of AI adoption significantly shape public attitudes. The study demonstrates that PAT offers a powerful lens for understanding the institutional and political implications of AI in government, emphasizing the need for policymakers to address delegation risks transparently to maintain public trust.
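The abstract describes a factorial survey experiment crossing policy domains with experimental manipulations. As a minimal sketch of how such a design is enumerated, the snippet below fully crosses the paper's three domains with a hypothetical two-level structural-risk-salience factor (the second factor's levels are an assumption for illustration; the paper's exact vignette factors are not given here):

```python
from itertools import product

# The three domains are stated in the abstract; the salience factor
# and its levels are hypothetical, added only to illustrate the
# full-crossing logic of a factorial vignette design.
domains = ["taxation", "welfare", "law enforcement"]
risk_salience = ["background", "foregrounded"]

# Full factorial crossing: one vignette cell per factor combination.
conditions = list(product(domains, risk_salience))

for domain, salience in conditions:
    print(f"domain={domain}, structural risk={salience}")

print(len(conditions))  # 3 domains x 2 salience levels = 6 cells
```

Respondents would then be randomly assigned to cells, and trust and perceived-control outcomes compared across them.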
Problem

Research questions and friction points this paper is trying to address.

AI in government raises fairness, transparency, and accountability concerns
AI adoption creates tensions in assessability, dependency, and contestability
Efficiency gains reduce citizens' perceived control, risking democratic legitimacy
Innovation

Methods, ideas, or system contributions that make the work stand out.

Applies principal-agent theory to AI delegation in government
Uses a pre-registered factorial survey experiment to test public trust
Highlights assessability, dependency, and contestability tensions