Learning Through Dialogue: Unpacking the Dynamics of Human-LLM Conversations on Political Issues

📅 2026-01-12
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study investigates how large language models (LLMs) acting as conversational partners influence users’ knowledge acquisition and confidence regarding political issues through interactive dialogue, with a focus on the mediating role of interaction dynamics and the moderating effect of individual differences. Analyzing 397 human–LLM socio-political dialogues, the research integrates linguistic feature coding with quantitative measures of cognitive engagement and reflective insight, employing mediation and moderation analyses. Findings reveal that learning outcomes are not solely determined by the quality of LLM explanations but emerge from collaborative human–AI interaction: explanatory richness enhances user confidence by fostering reflective insight and promotes knowledge gains by strengthening cognitive engagement. These effects are significantly moderated by political efficacy, with high-efficacy users deriving greater benefits from extended dialogues. The results underscore the need to dynamically tailor LLM explanation strategies to users’ evolving engagement states.

📝 Abstract
Large language models (LLMs) are increasingly used as conversational partners for learning, yet the interactional dynamics supporting users' learning and engagement are understudied. We analyze linguistic and interactional features from both LLM and participant turns across 397 human-LLM conversations about socio-political issues to identify the mechanisms and conditions under which LLM explanations shape changes in political knowledge and confidence. Mediation analyses reveal that LLM explanatory richness partially supports confidence by fostering users' reflective insight, whereas its effect on knowledge gain operates entirely through users' cognitive engagement. Moderation analyses show that these effects are highly conditional and vary by political efficacy. Confidence gains depend on how high-efficacy users experience and resolve uncertainty. Knowledge gains depend on high-efficacy users' ability to leverage extended interaction, with longer conversations benefiting primarily reflective users. In summary, we find that learning from LLMs is an interactional achievement, not a uniform outcome of better explanations. The findings underscore the importance of aligning LLM explanatory behavior with users' engagement states to support effective learning when designing human-AI interactive systems.
Problem

Research questions and friction points this paper is trying to address.

human-LLM interaction
political learning
explanatory dynamics
cognitive engagement
political efficacy
Innovation

Methods, ideas, or system contributions that make the work stand out.

human-LLM interaction
explanatory richness
cognitive engagement
political efficacy
reflective insight