🤖 AI Summary
Current AI education systems predominantly focus on information delivery and lack human-like pedagogical capabilities. To address this, we propose "pedagogical instruction following" — a paradigm that decouples teaching competence into modular, dynamically configurable instruction-following tasks, avoiding commitment to any predefined pedagogical theory and enabling educators or developers to flexibly specify teaching behaviors via system-level instructions. Building on Gemini 1.5 Pro, we construct a hybrid post-training dataset integrating pedagogy-oriented supervised fine-tuning with diverse, multi-source pedagogical instruction-following data. The resulting model, LearnLM, is substantially preferred by expert raters over strong baselines, with average preference strengths of 31% over GPT-4o, 11% over Claude 3.5, and 13% over the base Gemini 1.5 Pro across a diverse set of learning scenarios. LearnLM is available in Google AI Studio.
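To make the paradigm concrete, a pedagogical instruction-following example might pair a system-level instruction specifying teaching behavior with subsequent conversation turns. This is only an illustrative sketch; the paper does not publish its data schema, and all field names below are hypothetical:

```python
# Hypothetical structure of a pedagogical instruction-following example.
# Field names ("system_instruction", "turns") are illustrative, not the
# paper's actual schema.
example = {
    "system_instruction": (
        "Act as a patient tutor. Do not reveal the final answer; "
        "ask one guiding question per turn and adapt to the learner's level."
    ),
    "turns": [
        {"role": "user", "content": "Why does ice float on water?"},
        {
            "role": "model",
            "content": (
                "Good question! What do you already know about how "
                "density relates to floating?"
            ),
        },
    ],
}
```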
📝 Abstract
Today's generative AI systems are tuned to present information by default rather than engage users in service of learning as a human tutor would. To address the wide range of potential education use cases for these systems, we reframe the challenge of injecting pedagogical behavior as one of *pedagogical instruction following*, where training and evaluation examples include system-level instructions describing the specific pedagogy attributes present or desired in subsequent model turns. This framing avoids committing our models to any particular definition of pedagogy and instead allows teachers or developers to specify desired model behavior. It also clears a path to improving Gemini models for learning, by enabling the addition of our pedagogical data to post-training mixtures, alongside their rapidly expanding set of capabilities. Both represent important changes from our initial tech report. We show how training with pedagogical instruction following produces a LearnLM model (available on Google AI Studio) that is substantially preferred by expert raters across a diverse set of learning scenarios, with average preference strengths of 31% over GPT-4o, 11% over Claude 3.5, and 13% over the Gemini 1.5 Pro model LearnLM was based on.
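Since LearnLM is available on Google AI Studio, a developer can specify pedagogy at the system level through the Gemini API. Below is a minimal sketch using the `google-generativeai` Python SDK; the model name `learnlm-1.5-pro-experimental` is an assumption (check AI Studio for the current listing), and the instruction text is just one example of a teaching behavior:

```python
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # key from Google AI Studio

# System-level instruction specifying the desired pedagogy, per the
# pedagogical-instruction-following framing. The model name below is
# an assumption; consult AI Studio for the current LearnLM identifier.
model = genai.GenerativeModel(
    model_name="learnlm-1.5-pro-experimental",
    system_instruction=(
        "You are a tutor. Guide the learner with questions and hints "
        "instead of giving answers directly, and check understanding."
    ),
)

response = model.generate_content("Help me understand why the sky is blue.")
print(response.text)
```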