Can Large Language Models Bridge the Gap in Environmental Knowledge?

📅 2025-08-05
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study investigates whether large language models (LLMs) can bridge environmental knowledge gaps among undergraduate students. To this end, it systematically evaluates GPT-3.5, GPT-4, GPT-4o, Gemini, Claude Sonnet, and Llama 2 using the standardized Environmental Knowledge Test (EKT-19) together with domain-specific questions, marking the first multi-model benchmark assessment for environmental education. Evaluation criteria include knowledge coverage, answer accuracy, and pedagogical appropriateness. Results indicate that mainstream LLMs possess a broad and robust foundation in environmental science, making them suitable for supporting introductory instruction; however, persistent factual inaccuracies and contextual misapplications mean that expert verification remains necessary. The key contribution is a dedicated multi-model benchmark framework for environmental education that empirically delineates the capabilities and limitations of AI-assisted teaching, providing evidence-based guidance and methodological foundations for designing and deploying educational AI tools.

📝 Abstract
This research investigates the potential of Artificial Intelligence (AI) models to bridge the knowledge gap in environmental education among university students. Focusing on prominent large language models (LLMs) such as GPT-3.5, GPT-4, GPT-4o, Gemini, Claude Sonnet, and Llama 2, the study assesses their effectiveness in conveying environmental concepts and, consequently, in facilitating environmental education. The investigation employs a standardized tool, the Environmental Knowledge Test (EKT-19), supplemented by targeted questions, to evaluate the environmental knowledge of university students in comparison with the responses generated by the AI models. The results suggest that while AI models possess a vast, readily accessible, and valid knowledge base with the potential to empower both students and academic staff, a human discipline specialist in environmental sciences may still be necessary to validate the accuracy of the information provided.
Problem

Research questions and friction points this paper is trying to address.

Assessing LLMs' ability to teach environmental concepts
Comparing AI and student environmental knowledge
Evaluating need for human experts in AI education
Innovation

Methods, ideas, or system contributions that make the work stand out.

Using LLMs like GPT-4 for environmental education
Assessing AI models with EKT-19 test
Human specialists verify AI-generated environmental knowledge
Linda Smail
Zayed University
Statistics, probability, Bayesian networks
David Santandreu Calonge
Department of Academic Development, Mohamed bin Zayed University of Artificial Intelligence, Abu Dhabi, United Arab Emirates
Firuz Kamalov
Canadian University Dubai
Operator algebras, machine learning, mathematical finance, numerical analysis, education
Nur H. Orak
Department of Environmental Engineering, Marmara University, Istanbul, Türkiye