🤖 AI Summary
This study investigates how students’ use of large language models (LLMs) affects trust between university faculty and students, focusing on the mediating roles of informational and procedural justice in linking LLM usage transparency, team trust, and expected team performance. Drawing on survey data from 23 faculty members at Ndejje University (Uganda), we employ partial least squares structural equation modeling (PLS-SEM) to test the hypothesized pathways. Results suggest that LLM usage per se does not directly influence trust; rather, transparency in LLM use significantly enhances team trust and, through procedural justice, indirectly improves expected team performance. The study challenges the prevailing prohibition-based regulatory paradigm and advances a “transparent collaboration” framework for AI-integrated pedagogical governance. It contributes both theoretical insights into justice-mediated trust dynamics and a practical roadmap for rebuilding stakeholder trust in AI-augmented higher education.
📝 Abstract
Trust plays a pivotal role in lecturer-student collaboration, encompassing both teaching and research. The advent of Large Language Models (LLMs) in platforms such as OpenAI’s ChatGPT, coupled with their cost-effectiveness and high-quality output, has led to their rapid adoption among university students. However, discerning genuine student input from LLM-generated output poses a challenge for lecturers. This dilemma jeopardizes the trust relationship between lecturers and students, potentially affecting downstream university activities, particularly collaborative research initiatives. Despite attempts to establish guidelines for student LLM use, a clear framework that is mutually beneficial for lecturers and students in higher education remains elusive. This study addresses the research question: How does students’ use of LLMs affect Informational and Procedural Justice, and thereby influence Team Trust and Expected Team Performance? Methodologically, we applied a quantitative construct-based survey, analyzed using Partial Least Squares Structural Equation Modelling (PLS-SEM), to examine potential relationships among these constructs. Our findings, based on 23 valid responses from Ndejje University, indicate that lecturers are less concerned about the fairness of LLM use per se and more focused on the transparency of student utilization, which significantly and positively influences Team Trust. This research contributes to the global discourse on integrating and regulating LLMs and subsequent models in education. We propose that guidelines should support LLM use while enforcing transparency in lecturer-student collaboration to foster Team Trust and Performance. The study offers valuable insights for shaping policies that enable ethical and transparent LLM usage in education and ensure the effectiveness of collaborative learning environments.