🤖 AI Summary
This study investigates the pedagogical efficacy of AI-driven learning assistants in undergraduate civil and environmental engineering education, focusing on student engagement, ethical concerns, and policy awareness. Employing a mixed-methods design that combines pre- and post-surveys, system log analysis, qualitative coding of open-ended responses, and thematic modeling, the research identifies a structural tension between students' ethical anxieties and the ambiguity of existing institutional AI policies, a novel finding for engineering education. It proposes a triadic framework of usability, policy clarity, and instructor scaffolding to guide theoretically grounded and practically implementable AI integration. Results indicate that nearly half of the students found the AI assistant easier to turn to than human instructors or teaching assistants; the tool receives strong endorsement for homework support and conceptual clarification, yet evaluations of its instructional quality remain mixed. Crucially, ethical uncertainty emerges as the primary barrier to sustained student engagement.
📝 Abstract
As generative AI tools become increasingly integrated into higher education, understanding how students interact with and perceive these technologies is essential for responsible and effective adoption. This study evaluates the use of the Educational AI Hub, an AI-powered learning framework, in undergraduate civil and environmental engineering courses at a large R1 public university. Using a mixed-methods approach that combines pre- and post-surveys, system usage logs, and qualitative analysis of the open-ended prompts and questions students posed to the AI chatbot, the research explores students' perceptions of trust, ethical concerns, usability, and learning outcomes. Findings reveal that students appreciated the AI assistant for its convenience and comfort, with nearly half reporting greater ease in using the AI tool compared to seeking help from instructors or teaching assistants. The tool was seen as most helpful for completing homework and understanding course concepts, though perceptions of its instructional quality were mixed. Ethical concerns emerged as a key barrier to full engagement: while most students viewed AI use as ethically acceptable, many expressed uncertainties about institutional policies and apprehension about potential academic misconduct. This study contributes to the growing body of research on AI in education by highlighting the importance of usability, policy clarity, and faculty guidance in fostering meaningful AI engagement. The findings suggest that while students are ready to embrace AI as a supplement to human instruction, thoughtful integration and transparent institutional frameworks are critical for ensuring student confidence, trust, and learning effectiveness.