Sense and Sensibility: What makes a social robot convincing to high-school students?

📅 2025-06-14
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
This study investigates how social robots' multimodal certainty expressions (semantic, prosodic, and facial) affect high-school students' judgments on electric-circuit questions and their conformity behavior in educational settings. Method: a controlled multimodal human-robot interaction experiment was conducted with adolescents. Contribution/Results: the results reveal adolescents' heightened sensitivity to the robot's portrayed certainty: answer alignment reached 94.4% under "Certain" robot expressions versus 71.4% under "Uncertain" ones. However, overconfident robot expressions increased erroneous persuasion, particularly among students with prior large language model (LLM) experience. The study identifies a "reliability-certainty mismatch" risk: when expressed certainty exceeds actual reliability, trust and learning outcomes deteriorate. To address this, the authors propose a "reliability-certainty alignment" design principle. These findings provide empirical evidence and methodological guidance for the human-centered design of trustworthy educational robots, advancing the integration of affective and cognitive cues in pedagogical AI systems.

📝 Abstract
This study with 40 high-school students demonstrates the high influence of a social educational robot on students' decision-making for a set of eight true-false questions on electric circuits, for which the theory had been covered in the students' courses. The robot argued for the correct answer on six questions and the wrong answer on two, and 75% of the students were persuaded by the robot to perform beyond their expected capacity, positively when the robot was correct and negatively when it was wrong. Students with more experience using large language models were even more likely to be influenced by the robot's stance -- in particular on the two easiest questions, on which the robot was wrong -- suggesting that familiarity with AI can increase susceptibility to misinformation from AI. We further examined how three different levels of portrayed robot certainty, displayed through semantics, prosody and facial signals, affected how the students aligned with the robot's answer on specific questions and how convincing they perceived the robot to be on those questions. The students aligned with the robot's answers in 94.4% of cases when the robot was portrayed as Certain, 82.6% when it was Neutral and 71.4% when it was Uncertain. Alignment was thus high across all conditions, highlighting students' general readiness to accept the robot's stance, but alignment in the Uncertain condition was significantly lower than in the Certain condition. Post-test questionnaire answers further show that students found the robot most convincing when it was portrayed as Certain. These findings highlight the need for educational robots to adjust their display of certainty to the reliability of the information they convey, to promote students' critical thinking and reduce undue influence.
Problem

Research questions and friction points this paper is trying to address.

How social robots influence high-school students' decision-making
Impact of robot certainty levels on student alignment
AI familiarity increases susceptibility to robot misinformation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Controlled experiment showing a social robot's strong influence on student decision-making
Manipulation of portrayed robot certainty (semantic, prosodic, facial) with significant effects on student alignment
Design recommendation: educational robots should match displayed certainty to the reliability of their information