AI Summary
This study investigates how social robots' multimodal certainty expressions (semantic, prosodic, and facial) affect high school students' circuit knowledge judgments and conformity behavior in educational settings. Method: A controlled multimodal human-robot interaction experiment was conducted with adolescents. Contribution/Results: Results reveal adolescents' heightened sensitivity to AI robots' certainty levels: answer consistency reached 94.4% under "certain" robot expressions versus 71.4% under "uncertain" ones. However, overconfident robot expressions increased erroneous persuasion rates, particularly among students with prior large language model (LLM) experience. The study identifies a "reliability-certainty mismatch" risk: when expressed certainty exceeds actual reliability, trust and learning outcomes deteriorate. To address this, we propose a "reliability-certainty alignment" design principle. These findings provide empirical evidence and methodological guidance for human-centered design of trustworthy educational robots, advancing the integration of affective and cognitive cues in pedagogical AI systems.
Abstract
This study with 40 high-school students demonstrates the strong influence of a social educational robot on students' decision-making for a set of eight true-false questions on electric circuits, for which the theory had been covered in the students' courses. The robot argued for the correct answer on six questions and for the wrong answer on two, and 75% of the students were persuaded by the robot to perform beyond their expected capacity: positively when the robot was correct and negatively when it was wrong. Students with more experience using large language models were even more likely to be influenced by the robot's stance, in particular on the two easiest questions on which the robot was wrong, suggesting that familiarity with AI can increase susceptibility to misinformation by AI. We further examined how three different levels of portrayed robot certainty, displayed through semantics, prosody, and facial signals, affected how the students aligned with the robot's answer on specific questions and how convincing they perceived the robot to be on these questions. The students aligned with the robot's answers in 94.4% of the cases when the robot was portrayed as Certain, 82.6% when it was Neutral, and 71.4% when it was Uncertain. Alignment was thus high in all conditions, highlighting students' general susceptibility to accept the robot's stance, but alignment in the Uncertain condition was significantly lower than in the Certain condition. Post-test questionnaire answers further show that students found the robot most convincing when it was portrayed as Certain. These findings highlight the need for educational robots to adjust their display of certainty based on the reliability of the information they convey, to promote students' critical thinking and reduce undue influence.