🤖 AI Summary
This study investigates how non-technical, user-perceptible customizations of institutional LLM-as-a-Service (LLMaaS), such as UI localization and organizational branding, affect faculty and students' trust and usage behavior compared to commercial models like ChatGPT. Employing a mixed-methods approach (semi-structured interviews, contextualized surveys, cognitive walkthroughs, validated psychological scales for trust, perceived control, and technology acceptance, and behavioral log analysis), the research identifies three key customization dimensions at the UI/branding level, namely authority, affinity, and controllability, that significantly enhance initial trust and sustained adoption intention. It is the first study to empirically establish these human-centered design levers in LLMaaS contexts, addressing a critical gap in LLM human-factors research. The findings shift the paradigm for trustworthy AI deployment from purely technical adaptation toward experience-centered design, providing both theoretical grounding in user psychology and empirically validated tools for organizational AI implementation.
📄 Abstract
As the use of Large Language Models (LLMs) by students, lecturers, and researchers becomes more prevalent, universities, like other organizations, are pressed to develop coherent AI strategies. LLMs-as-a-Service (LLMaaS) offer accessible pre-trained models that can be customized to specific (business) needs. While most studies prioritize data, model, or infrastructure adaptations (e.g., model fine-tuning), we focus on user-salient customizations, such as interface changes and corporate branding, which we argue influence users' trust and usage patterns. This study serves as a functional prequel to a large-scale field study in which we examine how students and employees at a German university perceive and use their institution's customized LLMaaS compared to ChatGPT. The goals of this prequel are to stimulate discussion of the psychological effects of LLMaaS customizations and to refine our research approach through feedback. Our forthcoming findings will deepen the understanding of trust dynamics in LLMs and provide practical guidance for organizations considering LLMaaS deployment.