LearnLM: Improving Gemini for Learning

📅 2024-12-21
📈 Citations: 0
Influential: 0
🤖 AI Summary
Current AI education systems focus predominantly on information delivery and lack human-like pedagogical capability. To address this, we propose “pedagogical instruction following”, a paradigm that decomposes teaching competence into modular, dynamically configurable instruction-following tasks. This avoids committing to any predefined pedagogical theory and lets educators or developers specify teaching behaviors via system-level instructions. Building on Gemini 1.5 Pro, we construct a hybrid post-training dataset that combines pedagogy-oriented supervised fine-tuning with diverse, multi-source pedagogical instruction-following data. The resulting model, LearnLM, is substantially preferred by expert raters over strong baselines, with average preference strengths of 31% over GPT-4o, 11% over Claude 3.5, and 13% over the base Gemini 1.5 Pro across diverse learning scenarios. LearnLM has been deployed in Google AI Studio.

📝 Abstract
Today's generative AI systems are tuned to present information by default rather than engage users in service of learning as a human tutor would. To address the wide range of potential education use cases for these systems, we reframe the challenge of injecting pedagogical behavior as one of “pedagogical instruction following”, where training and evaluation examples include system-level instructions describing the specific pedagogy attributes present or desired in subsequent model turns. This framing avoids committing our models to any particular definition of pedagogy, and instead allows teachers or developers to specify desired model behavior. It also clears a path to improving Gemini models for learning -- by enabling the addition of our pedagogical data to post-training mixtures -- alongside their rapidly expanding set of capabilities. Both represent important changes from our initial tech report. We show how training with pedagogical instruction following produces a LearnLM model (available on Google AI Studio) that is preferred substantially by expert raters across a diverse set of learning scenarios, with average preference strengths of 31% over GPT-4o, 11% over Claude 3.5, and 13% over the Gemini 1.5 Pro model LearnLM was based on.
Problem

Research questions and friction points this paper is trying to address.

AI Education
Knowledge Transmission
Teaching Ability
Innovation

Methods, ideas, or system contributions that make the work stand out.

LearnLM
Educational Enhancement
AI Teaching Methodology
LearnLM Team
Google
Abhinit Modi
Google
Aditya Srikanth Veerubhotla
Google
Aliya Rysbek
Google
Andrea Huber
Google
Brett Wiltshire
Google
Daniel Gillick
Research Scientist, Google
Natural Language Processing
Daniel Kasenberg
Research Scientist, Google DeepMind
Artificial Intelligence
Irina Jurenka
DeepMind
Artificial Intelligence, Neuroscience, Unsupervised Learning, Generative Models, Representation
James Cohan
Google
Jennifer She
Google
Julia Wilkowski
Google
Kevin McKee
Google
Lisa Wang
DeepMind
Education, Machine Learning, Intelligent Tutoring Systems
Markus Kunesch
Google
Mike Schaekermann
Computer Science PhD, Eng BSc, Medicine State Exam I
Human-Computer Interaction, Machine Learning, Medicine
Miruna Pislar
Google
Parsa Mahmoudieh
Google
Paul Jhun
Google
Sara Wiltberger
Google
Shakir Mohamed
Research Director, Google DeepMind
Machine Learning, Bayesian Statistics, Deep Learning, Sociotechnical AI, Artificial Intelligence
Shashank Agarwal
Google
Shubham Milind Phal
Google
Sun Jae Lee
Google
T. Strinopoulos
Google
Wei-Jen Ko
Google
Amy Wang
Google
Ankit Anand
Research Scientist, Google DeepMind
Artificial Intelligence, Machine Learning, Algorithms
Avishkar Bhoopchand
Google
Dan Wild
Google
Divya Pandya
Google
Filip Bar
Google
Garth Graham
Google
Holger Winnemoeller
Google
Prateek Kolhar
Google
Renee Schneider
Google
Shaojian Zhu
Google
Stephanie Chan
Google
Steve Yadlowsky
Google
Viknesh Sounderajah
Google
Yannis Assael
Staff Research Scientist, Google DeepMind
Machine Learning, Deep Learning, Neural Networks, Artificial Intelligence