Prompt Injection Vulnerability of Consensus Generating Applications in Digital Democracy

📅 2025-08-06
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work identifies a critical vulnerability to prompt injection attacks in large language model (LLM)-driven consensus generation systems for digital democracy. Focusing on the preference aggregation stage, we propose the first four-dimensional taxonomy of prompt injection attacks. Empirical analysis—conducted on LLaMA 3.1 8B and GPT-4.1 Nano—reveals that critical prompts and explicit instructions more effectively manipulate ambiguous consensus, and that rational arguments exert stronger adversarial influence than emotive language. We further pioneer the application of Direct Preference Optimization (DPO) to enhance consensus model robustness against such attacks. Results show DPO significantly improves resilience, yet leaves detectable gaps in defending against highly ambiguous consensus scenarios. Collectively, this study delivers an empirically validated risk map and an actionable defense baseline for secure, democratic AI system design.

📝 Abstract
Large Language Models (LLMs) are gaining traction as a method to generate consensus statements and aggregate preferences in digital democracy experiments. Yet, LLMs may introduce critical vulnerabilities in these systems. Here, we explore the impact of prompt-injection attacks targeting consensus-generating systems by introducing a four-dimensional taxonomy of attacks. We test these attacks using LLaMA 3.1 8B and GPT-4.1 Nano, finding the LLMs more vulnerable to criticism attacks -- attacks using disagreeable prompts -- and more effective at tilting ambiguous consensus statements. We also find evidence of more effective manipulation when using explicit imperatives and rational-sounding arguments compared to emotional language or fabricated statistics. To mitigate these vulnerabilities, we apply Direct Preference Optimization (DPO), an alignment method that fine-tunes LLMs to prefer unperturbed consensus statements. While DPO significantly improves robustness, it still offers limited protection against attacks targeting ambiguous consensus. These results advance our understanding of the vulnerability and robustness of consensus-generating LLMs in digital democracy applications.
Problem

Research questions and friction points this paper is trying to address.

Explores prompt-injection attacks on consensus-generating LLMs in digital democracy
Tests vulnerability of LLMs to criticism attacks and ambiguous consensus manipulation
Evaluates DPO as a mitigation method, finding it offers only limited protection against attacks on ambiguous consensus
Innovation

Methods, ideas, or system contributions that make the work stand out.

Four-dimensional taxonomy for prompt-injection attacks
Direct Preference Optimization for robustness
Vulnerability testing on LLaMA 3.1 8B and GPT-4.1 Nano
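The DPO defense described above fine-tunes the model to prefer the unperturbed consensus statement over the injected one. As a rough illustration (not the paper's implementation), the standard DPO objective for one preference pair can be sketched as follows, where the "chosen" completion is the clean consensus and the "rejected" one is the perturbed consensus; all log-probability values and the `beta` setting below are hypothetical toy numbers:

```python
import math

def dpo_loss(logp_chosen_policy, logp_rejected_policy,
             logp_chosen_ref, logp_rejected_ref, beta=0.1):
    """DPO loss for a single preference pair.

    Each argument is a sequence log-probability of the chosen (clean
    consensus) or rejected (injected consensus) statement under the
    trainable policy or the frozen reference model.
    """
    # Implicit reward margin: how much more the policy prefers the clean
    # statement than the reference model does, scaled by beta.
    margin = beta * ((logp_chosen_policy - logp_chosen_ref)
                     - (logp_rejected_policy - logp_rejected_ref))
    # -log sigmoid(margin): loss shrinks as the policy learns to rank the
    # unperturbed consensus above the perturbed one.
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

# With no learned preference the margin is zero and the loss is log 2.
baseline = dpo_loss(-10.0, -12.0, -10.0, -12.0)
# After training, the policy assigns relatively more mass to the clean
# statement, so the loss drops.
trained = dpo_loss(-9.0, -13.0, -10.0, -12.0, beta=1.0)
```

Minimizing this loss over many (clean, injected) pairs is what nudges the model toward robustness, though, as the paper notes, the effect is weakest when the underlying consensus is itself ambiguous.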
Jairo Gudiño-Rosero
Université de Toulouse; Center for Collective Learning, IAST, Toulouse School of Economics
Clément Contet
Université Toulouse Capitole; IRIT
Umberto Grandi
Professor at Université Toulouse Capitole
Artificial Intelligence · Computational Social Choice
César A. Hidalgo
Center for Collective Learning, CIAS, Corvinus University of Budapest; AMBS, University of Manchester