Quantitative Insights into Large Language Model Usage and Trust in Academia: An Empirical Study

📅 2024-09-13
📈 Citations: 0
Influential: 0
🤖 AI Summary
Empirical evidence on the adoption, trust levels, and governance concerns surrounding large language models (LLMs) in academia remains scarce, hindering evidence-informed policy development. Method: We conducted an empirical survey of 125 scholars at a private R1 research university, employing structured questionnaires, statistical analyses (correlation tests, frequency and cross-tabulation analyses), and qualitative thematic coding. Contribution/Results: This study quantitatively establishes that LLM adoption among the surveyed academics stands at 75%; that trust exhibits a statistically significant positive correlation with usage intensity, consistent with a mutually reinforcing relationship; and that fact-checking is the most urgent challenge. By anchoring AI governance priorities in empirical data, the findings provide actionable, evidence-based foundations for institutional AI ethics frameworks, teaching and research guidelines, and technology deployment strategies in higher education.

📝 Abstract
Large Language Models (LLMs) are transforming writing, reading, teaching, and knowledge retrieval in many academic fields. However, concerns regarding their misuse and erroneous outputs have led to varying degrees of trust in LLMs within academic communities. In response, various academic organizations have proposed and adopted policies regulating their usage. Yet these policies are not grounded in substantial quantitative evidence, because there are no data on usage patterns and user opinions. Consequently, there is a pressing need to accurately quantify LLM usage, user trust in outputs, and the concerns to prioritize in deployment. This study addresses these gaps through a quantitative user study of LLM usage and trust in academic research and education. Specifically, our study surveyed 125 individuals at a private R1 research university regarding their usage of LLMs, their trust in LLM outputs, and key issues to prioritize for robust usage in academia. Our findings reveal: (1) widespread adoption of LLMs, with 75% of respondents actively using them; (2) a significant positive correlation between trust and adoption, as well as between engagement and trust; and (3) that fact-checking is the most critical concern. These findings suggest a need for policies that address pervasive usage, prioritize fact-checking mechanisms, and accurately calibrate user trust levels as users engage with these models. These strategies can help balance innovation with accountability and help integrate LLMs into the academic environment effectively and reliably.
Problem

Research questions and friction points this paper is trying to address.

Quantify LLM usage in academia
Assess trust in LLM outputs
Identify key issues for LLM deployment
Innovation

Methods, ideas, or system contributions that make the work stand out.

Quantitative user study
Trust and adoption correlation
Fact-checking priority
Minseok Jung
Graduate Student, MIT IDSS & CSAIL
Artificial Intelligence · Science and Technology Policy
Aurora Zhang
Massachusetts Institute of Technology, USA
May Fung
Junho Lee
United Nations, USA
Paul Pu Liang
Massachusetts Institute of Technology, USA