Fine-Grained Interpretation of Political Opinions in Large Language Models

📅 2025-06-05
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study identifies an inconsistency between the internal political intent of large language models (LLMs) and their open-ended generated outputs, and argues that unidimensional political modeling risks conceptual conflation. Method: We propose the first four-dimensional decoupled political conceptual framework, spanning economic/social, liberal/conservative, egalitarian/authoritarian, and local/global axes, and develop a corresponding annotated dataset and an interpretable representation engineering methodology. The approach moves beyond response-level analysis by leveraging internal activation probing and vector-space intervention. Contribution/Results: Validated across eight open-source LLMs, the method achieves effective dimensional decoupling; detection tasks demonstrate high semantic consistency and out-of-distribution robustness; and intervention experiments enable targeted steering of generated text along specific political dimensions, providing an interpretable, actionable technical pathway for value alignment in LLMs.
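
As a concrete illustration of the activation-probing side, the minimal sketch below derives a single political concept vector as a difference of mean hidden states over two contrastive prompt sets. It is not the authors' released code: the model name, layer index, and example statements are placeholder assumptions, and the paper's annotated dataset and its three representation engineering techniques would replace this simple difference-of-means recipe.

```python
# Hedged sketch (not the paper's implementation): learning one political concept
# vector by contrasting mean activations over two small prompt sets.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "meta-llama/Llama-2-7b-hf"   # placeholder; any open-source LLM with accessible hidden states
layer = 15                                 # placeholder probe layer; chosen by validation in practice

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.float16, device_map="auto")
model.eval()

def last_token_states(prompts):
    """Return the hidden state of the final token at `layer` for each prompt."""
    states = []
    for p in prompts:
        inputs = tokenizer(p, return_tensors="pt").to(model.device)
        with torch.no_grad():
            out = model(**inputs, output_hidden_states=True)
        states.append(out.hidden_states[layer][0, -1, :].float())
    return torch.stack(states)

# Contrastive statements for ONE axis (illustrative only); the paper's dataset
# would supply many such statements per political dimension.
pos_prompts = ["Wealth should be redistributed to reduce inequality."]
neg_prompts = ["A strong central authority should decide how society is run."]

# Difference-of-means direction, normalized; dot products with new activations
# can then serve as a linear detector for this dimension.
concept_vec = last_token_states(pos_prompts).mean(0) - last_token_states(neg_prompts).mean(0)
concept_vec = concept_vec / concept_vec.norm()
```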

📝 Abstract
Studies of LLMs' political opinions mainly rely on evaluations of their open-ended responses. Recent work indicates that there is a misalignment between LLMs' responses and their internal intentions. This motivates us to probe LLMs' internal mechanisms to help uncover their internal political states. Additionally, we find that analyses of LLMs' political opinions often rely on single-axis concepts, which can lead to concept confounds. In this work, we extend the single axis to multiple dimensions and apply interpretable representation engineering techniques for more transparent LLM political concept learning. Specifically, we design a four-dimensional political learning framework and construct a corresponding dataset for fine-grained political concept vector learning. These vectors can be used to detect and intervene in LLM internals. Experiments are conducted on eight open-source LLMs with three representation engineering techniques. Results show that these vectors can disentangle political concept confounds. Detection tasks validate the semantic meaning of the vectors and show good generalization and robustness in OOD settings. Intervention experiments show that these vectors can intervene in LLMs to generate responses with different political leanings.
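
The intervention described in the abstract can be sketched as activation steering: adding a scaled concept vector to one layer's residual stream during generation. This is a hedged illustration rather than the paper's method; the module path `model.model.layers` matches LLaMA-style checkpoints and differs for other architectures, and `concept_vec`, `layer`, `model`, and `tokenizer` are assumed from the probing sketch earlier on this page, with `alpha` a hypothetical steering strength.

```python
# Hedged sketch of the intervention side: shift the residual stream of one
# decoder layer along the learned political direction during generation.
import torch

alpha = 8.0  # hypothetical steering strength; flipping its sign flips the leaning

def steer_hook(module, inputs, output):
    # LLaMA-style decoder layers return a tuple whose first element is the
    # hidden states of shape (batch, seq_len, hidden_dim).
    hidden = output[0] + alpha * concept_vec.to(device=output[0].device, dtype=output[0].dtype)
    return (hidden,) + output[1:]

handle = model.model.layers[layer].register_forward_hook(steer_hook)
try:
    prompt = "What should the government's role in the economy be?"
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    gen = model.generate(**inputs, max_new_tokens=80, do_sample=False)
    print(tokenizer.decode(gen[0], skip_special_tokens=True))
finally:
    handle.remove()  # detach the hook so later generations are unsteered
```

Steering only one layer with a single vector keeps the edit interpretable; in practice the layer choice and strength would be tuned per model and per political dimension.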
Problem

Research questions and friction points this paper is trying to address.

Probing LLMs' internal political intentions versus responses
Extending single-axis political analysis to multi-dimensions
Detecting and intervening in LLMs' political leanings
Innovation

Methods, ideas, or system contributions that make the work stand out.

Multi-dimensional political learning framework
Interpretable representation engineering techniques
Fine-grained political concept vector learning