Campus AI vs. Commercial AI: How Customizations Shape Trust and Usage of LLM as-a-Service Chatbots

📅 2025-09-19
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
Prior research on institutional large language model as-a-service (LLMaaS) systems emphasizes technical integration while neglecting human-centered effects, particularly how user-visible interface customization influences psychological perceptions. Method: We conducted a field study comparing faculty and students' trust, privacy perceptions, and hallucination experiences with a campus-customized LLMaaS chatbot versus ChatGPT, using behavioral logs and validated surveys. Contribution/Results: Interface customization significantly enhanced perceived trust and privacy safety while reducing perceived hallucinations; these effects were most pronounced among dual-system users who interacted with both platforms concurrently. The study provides the first empirical evidence of a "psychological calibration mechanism" driven by non-functional interface adaptation, demonstrating that perceptual alignment, not just functional equivalence, mediates user acceptance. The findings advance human-centered AI deployment by establishing interface design as a trust-building lever for institutional AI governance, shifting emphasis from technical compliance toward cognitive alignment.

📝 Abstract
As the use of LLM chatbots by students and researchers becomes more prevalent, universities are pressed to develop AI strategies. One strategy that many universities pursue is to customize pre-trained LLMs offered as a service (LLMaaS). While most studies on LLMaaS chatbots prioritize technical adaptations, we focus on the psychological effects of user-salient customizations, such as interface changes. We assume that such customizations influence users' perception of the system and are therefore important in guiding safe and appropriate use. In a field study, we examine how students and employees (N = 526) at a German university perceive and use their institution's customized LLMaaS chatbot compared to ChatGPT. Participants using both systems (n = 116) reported greater trust, higher perceived privacy, and fewer experienced hallucinations with their university's customized LLMaaS chatbot than with ChatGPT. We discuss theoretical implications for research on calibrated trust and offer guidance on the design and deployment of LLMaaS chatbots.
Problem

Research questions and friction points this paper is trying to address.

Examining how university LLM customizations affect user trust
Comparing a customized campus chatbot with commercial ChatGPT in users' perception and use
Investigating psychological impacts of interface changes on LLM perception
Innovation

Methods, ideas, or system contributions that make the work stand out.

Customized interface changes enhance trust
University LLMaaS reduces perceived hallucinations
User-salient customizations improve privacy perception