Eric Zelikman

Google Scholar ID: V5B8dSUAAAAJ
Stanford University
reasoning · representation learning · machine learning
Citations & Impact
All-time
  • Citations: 3,826
  • H-index: 15
  • i10-index: 15
  • Publications: 20
  • Co-authors: 28
Publications
20 items
Resume (English only)
Academic Achievements
  • Published several papers, including 'Quiet-STaR: Language Models Can Teach Themselves to Think Before Speaking', 'Self-Taught Optimizer (STOP): Recursively Self-Improving Code Generation', and 'Hypothesis Search: Inductive Reasoning with Language Models'.
Research Experience
  • As an early employee of xAI, was a key contributor to the pretraining data for Grok 2, initiated and scaled reinforcement learning for reasoning for Grok 3, and built the agent RL infrastructure and recipe for Grok 4. Led and proved out multiple yet-to-be-released experimental efforts.
Education
  • Ph.D. candidate at Stanford, advised by Nick Haber and Noah Goodman.
Background
  • Fascinated by building AI models that truly understand people; has designed algorithms that teach models to reason.
Miscellany
  • Open to connecting with those passionate about building models that understand and empower people rather than automating or replacing them. Offers non-commercial research chats.