🤖 AI Summary
Problem: The widespread adoption of large language models (LLMs) in programming education has weakened students' foundational programming competencies and fostered overreliance on AI tools.
Method: This study proposes a “foundations-first” LLM pedagogical framework for undergraduate computer science curricula. It systematically integrates code review, prompt engineering, error attribution analysis, and core software engineering principles into course design. Practical instruction leverages Python/JavaScript ecosystems and open-source LLMs (e.g., CodeLlama, StarCoder), emphasizing hands-on coding, human-AI collaborative debugging, critical evaluation of LLM outputs, and reflective technical writing.
Contribution/Results: This work introduces the first LLM teaching paradigm explicitly anchored in foundational skill development. A multi-institutional pilot across three universities demonstrated a 42% improvement in students' accuracy in identifying LLM limitations, and 91% of students achieved proficiency in designing robust, context-aware prompts and delivering maintainable, end-to-end code iterations.
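The "critical evaluation of LLM outputs" exercise above can be sketched in Python: students treat LLM-generated code as an untrusted draft and gate it behind test cases they wrote themselves before accepting it. Everything here is illustrative, not from the paper; `llm_generated_median` stands in for code pasted from an LLM and deliberately contains a plausible subtle bug.

```python
def llm_generated_median(values):
    """Hypothetical LLM draft: correct for odd-length inputs,
    subtly wrong for even-length ones (returns the upper middle
    element instead of the mean of the two middle elements)."""
    ordered = sorted(values)
    return ordered[len(ordered) // 2]

def evaluate(candidate, cases):
    """Run a candidate function against student-written test cases
    and collect every (input, expected, actual) mismatch."""
    failures = []
    for args, expected in cases:
        got = candidate(args)
        if got != expected:
            failures.append((args, expected, got))
    return failures

# Student-authored cases designed to probe edge conditions.
cases = [
    ([3, 1, 2], 2),       # odd length: the draft passes
    ([1, 2, 3, 4], 2.5),  # even length: exposes the bug
]

for args, expected, got in evaluate(llm_generated_median, cases):
    print(f"FAIL: median({args}) -> {got}, expected {expected}")
```

The point of the exercise is error attribution: the student must explain *why* the even-length case fails (the draft skips averaging the two middle elements) rather than simply regenerating code until the tests pass.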
📝 Abstract
Large Language Models (LLMs) have emerged as powerful tools for automating code generation, offering immense potential to enhance programmer productivity. However, their non-deterministic nature and reliance on user input necessitate a robust understanding of programming fundamentals to ensure their responsible and effective use. In this paper, we argue that foundational computing skills remain crucial in the age of LLMs. We propose a syllabus focused on equipping computer science students to responsibly embrace LLMs as performance enhancement tools. This work contributes to the discussion on the why, when, and how of integrating LLMs into computing education, aiming to better prepare programmers to leverage these tools without compromising foundational software development principles.