Campus AI vs Commercial AI: A Late-Breaking Study on How LLM As-A-Service Customizations Shape Trust and Usage Patterns

📅 2025-05-15
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
This study investigates how non-technical, user-perceptible customizations of institutional LLM-as-a-Service (LLMaaS) offerings, such as UI localization and organizational branding, affect faculty and students' trust and usage behavior compared to commercial models like ChatGPT. Using a mixed-methods approach that combines semi-structured interviews, contextualized surveys, cognitive walkthroughs, validated psychological scales (trust, perceived control, technology acceptance), and behavioral log analysis, the research identifies three key customization dimensions at the UI/branding level (authority, affinity, and controllability) that significantly enhance initial trust and sustained adoption intention. It is the first work to empirically establish these human-centered design levers in LLMaaS contexts, addressing a critical gap in LLM human-factors research. The findings shift the paradigm for trustworthy AI deployment from purely technical adaptation toward experience-centered design, providing both theoretical grounding in user psychology and empirically validated tools for organizational AI implementation.

📝 Abstract
As the use of Large Language Models (LLMs) by students, lecturers, and researchers becomes more prevalent, universities, like other organizations, are pressed to develop coherent AI strategies. LLMs as-a-Service (LLMaaS) offer accessible pre-trained models that can be customized to specific (business) needs. While most studies prioritize data, model, or infrastructure adaptations (e.g., model fine-tuning), we focus on user-salient customizations, such as interface changes and corporate branding, which we argue influence users' trust and usage patterns. This study serves as a functional prequel to a large-scale field study in which we examine how students and employees at a German university perceive and use their institution's customized LLMaaS compared to ChatGPT. The goals of this prequel are to stimulate discussion of the psychological effects of LLMaaS customizations and to refine our research approach through feedback. Our forthcoming findings will deepen the understanding of trust dynamics in LLMs, providing practical guidance for organizations considering LLMaaS deployment.
Problem

Research questions and friction points this paper is trying to address.

Examining how LLMaaS customizations affect user trust and usage
Comparing institutional LLMaaS perceptions versus commercial AI like ChatGPT
Exploring psychological impacts of interface and branding changes in LLMaaS
Innovation

Methods, ideas, or system contributions that make the work stand out.

Focus on user-salient customizations like interface changes
Compare institution's customized LLMaaS with ChatGPT
Study psychological effects of LLMaaS customizations