🤖 AI Summary
Current user trust in chatbots often stems from cognitive biases induced by interaction design; treating this behaviorally formed trust as if it were normative trust obscures ethical and cognitive issues inherent in human–AI interaction. Drawing on cognitive psychology and human–computer interaction analysis, this study distinguishes these two forms of trust clearly for the first time. It proposes a novel conceptualization of chatbots as “highly skilled sales agents” operating under organizational objectives. Rather than proposing specific algorithms, the work develops a conceptual framework that elucidates how design strategies shape user trust, offering a theoretical foundation for understanding trust formation mechanisms. The study further calls for the development of mechanisms that support users in appropriately calibrating their trust in conversational AI systems.
📝 Abstract
As chatbots increasingly blur the boundary between automated systems and human conversation, the foundations of trust in these systems warrant closer examination. While regulatory and policy frameworks tend to define trust in normative terms, the trust users place in chatbots often emerges from behavioral mechanisms. In many cases, this trust is not earned through demonstrated trustworthiness but is instead shaped by interactional design choices that leverage cognitive biases to influence user behavior. Based on this observation, we propose reframing chatbots not as companions or assistants, but as highly skilled salespeople whose objectives are determined by the deploying organization. We argue that the coexistence of competing notions of “trust” under a shared term obscures important distinctions between psychological trust formation and normative trustworthiness. Addressing this gap requires further research and stronger support mechanisms to help users appropriately calibrate trust in conversational AI systems.