The Language You Ask In: Language-Conditioned Ideological Divergence in LLM Analysis of Contested Political Documents

📅 2026-01-17
🤖 AI Summary
This study reveals that large language models (LLMs) exhibit systematic, language-conditioned ideological bias in multilingual political text analysis. By submitting semantically equivalent prompts in Russian and Ukrainian to the same LLM and analyzing its interpretations of identical Ukrainian civil society documents, the research—combining qualitative discourse analysis with comparative political text methods—provides the first empirical evidence that merely switching the prompt language can yield starkly opposing political stances. Specifically, Russian-language prompts tend to align with official Russian narratives and delegitimize civil society, whereas Ukrainian-language prompts resonate with Western liberal-democratic discourses and affirm civil society's legitimacy. These findings underscore the critical role of prompt language in shaping LLMs' ideological outputs and offer essential insights for mitigating bias in multilingual AI systems.

📝 Abstract
Large language models (LLMs) are increasingly deployed as analytical tools across multilingual contexts, yet their outputs may carry systematic biases conditioned by the language of the prompt. This study presents an experimental comparison of LLM-generated political analyses of a Ukrainian civil society document, using semantically equivalent prompts in Russian and Ukrainian. Despite identical source material and parallel query structures, the resulting analyses varied substantially in rhetorical positioning, ideological orientation, and interpretive conclusions. The Russian-language output echoed narratives common in Russian state discourse, characterizing civil society actors as illegitimate elites undermining democratic mandates. The Ukrainian-language output adopted vocabulary characteristic of Western liberal-democratic political science, treating the same actors as legitimate stakeholders within democratic contestation. These findings demonstrate that prompt language alone can produce systematically different ideological orientations from identical models analyzing identical content, with significant implications for AI deployment in polarized information environments, cross-lingual research applications, and the governance of AI systems in multilingual societies.
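The paired-prompt design described in the abstract can be sketched as a small harness: the same document is passed through the same model once per prompt language, and the outputs are collected for downstream comparison. This is an illustrative assumption, not the authors' code; `query_model`, the prompt wording, and all names here are hypothetical placeholders for whatever model call and instructions the study used.

```python
# Hypothetical sketch of the paired-prompt design: one model, one document,
# semantically equivalent prompts in two languages. All identifiers here
# (query_model, PROMPTS, paired_analysis) are illustrative assumptions.
from typing import Callable, Dict

PROMPTS = {
    # The same instruction ("Analyze the political stance of the following
    # document:"), rendered in each prompt language.
    "ru": "Проанализируйте политическую позицию следующего документа:\n\n{doc}",
    "uk": "Проаналізуйте політичну позицію наступного документа:\n\n{doc}",
}

def paired_analysis(document: str,
                    query_model: Callable[[str], str]) -> Dict[str, str]:
    """Run the identical document through the same model once per
    prompt language; return the raw analyses keyed by language code."""
    return {lang: template.format(doc=document)
            for lang, template in PROMPTS.items()} if query_model is None else \
           {lang: query_model(template.format(doc=document))
            for lang, template in PROMPTS.items()}
```

The returned pair of analyses would then be compared for rhetorical positioning and ideological orientation, which the paper does qualitatively via discourse analysis rather than automated metrics.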
Problem

Research questions and friction points this paper is trying to address.

language-conditioned bias
ideological divergence
large language models
political document analysis
multilingual AI
Innovation

Methods, ideas, or system contributions that make the work stand out.

language-conditioned bias
ideological divergence
large language models
cross-lingual analysis
prompt engineering