🤖 AI Summary
This work identifies a critical vulnerability to prompt injection attacks in large language model (LLM)-driven consensus generation systems for digital democracy. Focusing on the preference aggregation stage, we propose the first four-dimensional taxonomy of prompt injection attacks. Empirical analysis—conducted on LLaMA-3.1-8B and GPT-4.1-Nano—reveals that criticism-style prompts and explicit instructions more effectively manipulate ambiguous consensus, and that rational-sounding arguments exert stronger adversarial influence than emotive language. We further pioneer the application of Direct Preference Optimization (DPO) to enhance consensus model robustness against such attacks. Results show DPO significantly improves resilience, yet leaves detectable gaps in defending against highly ambiguous consensus scenarios. Collectively, this study delivers an empirically validated risk map and an actionable defense baseline for secure, democratic AI system design.
📝 Abstract
Large Language Models (LLMs) are gaining traction as a method to generate consensus statements and aggregate preferences in digital democracy experiments. Yet, LLMs may introduce critical vulnerabilities into these systems. Here, we explore the impact of prompt-injection attacks targeting consensus-generating systems by introducing a four-dimensional taxonomy of attacks. We test these attacks using LLaMA 3.1 8B and GPT-4.1 Nano, finding that the LLMs are more vulnerable to criticism attacks -- attacks using disagreeable prompts -- and that attacks are more effective at tilting ambiguous consensus statements. We also find evidence of more effective manipulation when using explicit imperatives and rational-sounding arguments compared to emotional language or fabricated statistics. To mitigate these vulnerabilities, we apply Direct Preference Optimization (DPO), an alignment method that fine-tunes LLMs to prefer unperturbed consensus statements. While DPO significantly improves robustness, it still offers limited protection against attacks targeting ambiguous consensus. These results advance our understanding of the vulnerability and robustness of consensus-generating LLMs in digital democracy applications.
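To make the DPO mitigation concrete: DPO fine-tunes the model on preference pairs, here pairing an unperturbed consensus statement (chosen) against an injection-influenced one (rejected), and minimizes a log-sigmoid loss on the implicit reward margin relative to a frozen reference model. A minimal sketch of the per-pair objective, assuming illustrative log-probability inputs and a hypothetical `beta` value (not taken from the paper):

```python
import math

def dpo_loss(logp_chosen, logp_rejected,
             ref_logp_chosen, ref_logp_rejected, beta=0.1):
    """DPO loss for a single preference pair.

    In this setting, 'chosen' would be the unperturbed consensus
    statement and 'rejected' the injection-influenced one; all
    arguments are sequence log-probabilities under the policy
    (logp_*) and the frozen reference model (ref_logp_*).
    """
    # Implicit reward margin: how much more the policy prefers the
    # chosen statement over the rejected one, relative to the reference.
    margin = (logp_chosen - ref_logp_chosen) - (logp_rejected - ref_logp_rejected)
    # Negative log-sigmoid of the scaled margin; minimizing this pushes
    # the policy toward the unperturbed (chosen) statement.
    return -math.log(1.0 / (1.0 + math.exp(-beta * margin)))
```

When the policy and reference assign identical log-probabilities, the margin is zero and the loss is log 2; as the policy shifts probability mass toward the unperturbed statement, the loss falls toward zero.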