🤖 AI Summary
This paper addresses two linked challenges: the low detectability of political propaganda in news discourse and users' underdeveloped critical thinking. To tackle these, it proposes a counterintuitive design paradigm that reframes large language models' (LLMs') inherent political biases not as flaws to be mitigated but as controllable resources for cognitive intervention. Methodologically, it draws on the theories of confirmation bias and cognitive dissonance to develop a stance-aware propaganda detection tool. The system models the user's stance, triggers bias awareness, delivers personalized feedback, and incrementally exposes the user to pluralistic perspectives, embodying a "bias-as-interface" interaction design. Its key contribution lies in being the first to reconceptualize AI bias as a designable cognitive lever. A qualitative study of human-AI collaboration suggests the approach deepens users' recognition of propaganda and their willingness to reflect, advancing a shift toward human-AI co-facilitated development of critical thinking.
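The pipeline described above (model the user's stance, then introduce dissonant perspectives gradually rather than all at once) can be sketched in miniature. The paper describes its system only qualitatively, so everything here is an illustrative assumption: the discrete stance scale, the `UserModel` type, and the idea of ordering feedback by stance distance are hypothetical stand-ins, not the authors' implementation.

```python
from dataclasses import dataclass

# Hypothetical discrete stance scale; the paper does not specify one.
STANCES = ("left", "center", "right")


@dataclass
class UserModel:
    stance: str  # modeled political stance, e.g. inferred from prior interactions


def graded_perspectives(user: UserModel, analyses: dict[str, str]) -> list[str]:
    """Order per-stance propaganda analyses so stance-congruent feedback comes
    first, then progressively more dissonant perspectives are introduced.

    Distance on the stance scale is used as a crude proxy for the cognitive
    dissonance a perspective would trigger (an assumption for illustration).
    """
    here = STANCES.index(user.stance)
    order = sorted(STANCES, key=lambda s: abs(STANCES.index(s) - here))
    return [analyses[s] for s in order]


# Usage: a left-leaning user first sees the congruent reading, then the others.
user = UserModel(stance="left")
analyses = {
    "left": "left-leaning reading of the article",
    "center": "centrist reading of the article",
    "right": "right-leaning reading of the article",
}
print(graded_perspectives(user, analyses))
```

The design choice this illustrates is the "gradual introduction of diverse perspectives" recommendation: rather than confronting the user immediately with the most dissonant analysis, the interface sequences perspectives from congruent to challenging.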
📝 Abstract
This paper explores the design of a propaganda detection tool using Large Language Models (LLMs). Acknowledging the inherent biases in AI models, especially in political contexts, we investigate how these biases might be leveraged to enhance critical thinking in news consumption. Countering the typical view of AI biases as detrimental, our research proposes strategies of user choice and personalization in response to a user's political stance, applying psychological concepts of confirmation bias and cognitive dissonance. We present findings from a qualitative user study, offering insights and design recommendations (bias awareness, personalization and choice, and gradual introduction of diverse perspectives) for AI tools in propaganda detection.