AI Summary
Prior research on institutional large language model-as-a-service (LLMaaS) systems emphasizes technical integration while neglecting human-centered effects, particularly how user-visible interface customization influences psychological perceptions. Method: We conducted a field study comparing faculty and students' trust, privacy perceptions, and hallucination experiences with a campus-customized LLMaaS chatbot versus ChatGPT, using behavioral logs and validated surveys. Contribution/Results: Interface customization significantly enhanced perceived trust and privacy safety while reducing perceived hallucinations; these effects were most pronounced among dual-system users who interacted with both platforms concurrently. This study provides the first empirical evidence of a "psychological calibration mechanism" driven by non-functional interface adaptation, demonstrating that perceptual alignment, not just functional equivalence, mediates user acceptance. The findings advance human-centered AI deployment by establishing interface design as a trust-building lever for institutional AI governance, shifting emphasis from technical compliance toward cognitive alignment.
Abstract
As the use of LLM chatbots by students and researchers becomes more prevalent, universities are pressed to develop AI strategies. One strategy that many universities pursue is to customize a pre-trained LLM offered as a service (LLMaaS). While most studies of LLMaaS chatbots prioritize technical adaptations, we focus on the psychological effects of user-salient customizations, such as interface changes. We assume that such customizations influence users' perception of the system and are therefore important in guiding safe and appropriate use. In a field study, we examine how students and employees (N = 526) at a German university perceive and use their institution's customized LLMaaS chatbot compared to ChatGPT. Participants who used both systems (n = 116) reported greater trust, higher perceived privacy, and fewer experienced hallucinations with their university's customized LLMaaS chatbot than with ChatGPT. We discuss theoretical implications for research on calibrated trust and offer guidance on the design and deployment of LLMaaS chatbots.