🤖 AI Summary
This study examines public attitudes toward artificial intelligence (AI) applications in elections, particularly deceptive uses, and their effects on party evaluations and regulatory preferences. Drawing on a preregistered, nationally representative survey and two experiments (N = 7,635), we develop a tripartite framework categorizing electoral AI uses into campaign operations, voter outreach, and deceptive practices. We document, for the first time, that the public strongly opposes deceptive AI election practices. While such practices do not significantly reduce affective favorability toward the offending party (p > 0.1), they substantially increase support for a comprehensive ban on AI in elections (+12.3 percentage points, p < 0.001). These findings reveal a regulatory incentive misalignment: although the public favors stringent regulation, political parties face insufficient reputational costs for deploying deceptive AI, exposing an institutional gap. The study provides empirical foundations and policy-relevant insights for governing AI in electoral contexts.
📝 Abstract
All over the world, political parties, politicians, and campaigns are exploring how Artificial Intelligence (AI) can help them win elections. However, the effects of these activities are unknown. We propose a framework for assessing AI's impact on elections based on its application to various campaigning tasks. The electoral uses of AI vary widely, carrying different levels of concern and need for regulatory oversight. To account for this diversity, we group AI-enabled campaigning uses into three categories: campaign operations, voter outreach, and deception. Using this framework, we provide the first systematic evidence from a preregistered representative survey and two preregistered experiments (n = 7,635) on how Americans think about AI in elections and on the effects of specific campaigning choices. We report three main findings: 1) the public distinguishes between different AI uses in elections, viewing them predominantly negatively but objecting most strongly to deceptive uses; 2) deceptive AI practices can adversely affect relevant attitudes and strengthen public support for stopping AI development; 3) although deceptive electoral uses of AI are intensely disliked, they do not result in substantial favorability penalties for the parties involved. This creates a misalignment between the incentives for deceptive practices and their externalities: we cannot count on public opinion to provide strong enough incentives for parties to forgo the tactical advantages of AI-enabled deception. Regulatory oversight and systematic outside monitoring of electoral AI uses are therefore needed. Still, regulators should account for the diversity of AI uses and avoid disincentivizing their electoral use entirely.