🤖 AI Summary
This study investigates how large language model (LLM) hallucinations affect user trust and human-AI interaction. Method: Drawing on the calibrated trust model of Lee & See and the trust-related factors of Afroogh et al., we conducted a qualitative study with 192 participants, followed by thematic coding and theory-driven validation. Contribution/Results: We show that trust in LLMs is not binary but a context-sensitive, dynamically calibrated process. We confirm expectancy, prior experience, and user expertise & domain knowledge as user-related trust factors, and identify "intuition" as an additional, empirically grounded cue for hallucination detection, extending the recursive trust calibration process. Trust dynamics are further shaped by two contextual moderators: perceived risk and decision stakes. The findings yield actionable design principles and practical guidelines for responsible, reflective LLM use, advancing both the theoretical understanding and the empirical grounding of trustworthy AI interaction.
📝 Abstract
Hallucinations are outputs of Large Language Models (LLMs) that are factually incorrect yet appear plausible [1]. This paper investigates how such hallucinations influence users' trust in, and interaction with, LLMs. To explore this in everyday use, we conducted a qualitative study with 192 participants. Our findings show that hallucinations do not result in blanket mistrust but instead lead to context-sensitive trust calibration. Building on the calibrated trust model by Lee & See [2] and Afroogh et al.'s trust-related factors [3], we confirm expectancy [3], [4], prior experience [3], [4], [5], and user expertise & domain knowledge [3], [4] as user-related (human) trust factors, and identify intuition as an additional factor relevant for hallucination detection. We also found that trust dynamics are further influenced by contextual factors, particularly perceived risk [3] and decision stakes [6]. Consequently, we validate the recursive trust calibration process proposed by Blöbaum [7] and extend it by including intuition as a user-related trust factor. Based on these insights, we propose practical recommendations for responsible and reflective LLM use.