Alexandru Meterez
Google Scholar ID: wSrCMa4AAAAJ
Harvard University
Machine Learning · Deep Learning Theory · Optimization
Citations & Impact (all-time)
  • Citations: 100
  • H-index: 5
  • i10-index: 4
  • Publications: 7
  • Co-authors: 32
Resume (English only)
Academic Achievements
  • Published papers and preprints, including:
    - Under submission at ICLR 2026: Seesaw: Accelerating Training by Balancing Learning Rate and Batch Size Scheduling
    - NeurIPS 2025 Workshop (OPT): A Simplified Analysis of SGD for Linear Regression with Weight Averaging
    - COLM 2025: Echo Chamber: RL Post-training Amplifies Behaviors Learned in Pretraining
    - ICLR 2025: The Optimization Landscape of SGD Across the Feature Learning Strength
    - NeurIPS 2024: Why do Learning Rates Transfer? Reconciling Optimization and Scaling Limits for Deep Learning
    - ICLR 2024: Towards Training Without Depth Limits: Batch Normalization Without Gradient Explosion
    - RECOMB 2023: Aligning distant sequences to graphs using long seed sketches
Research Experience
  • Interned in various places.
Education
  • MSc in Data Science, ETH Zurich; worked with Gunnar Rätsch and Francesco Orabona (KAUST).
  • BSc in Computer Science, UPB; worked with Iuliu Vasilescu.
Background
  • CS PhD student at Harvard, co-advised by Prof. Cengiz Pehlevan and Prof. Sham Kakade. Main interests are deep learning theory and neural network optimization, aiming to understand how these models learn from data by developing theoretical frameworks backed by empirical evidence.
Miscellany
  • Born in Romania. Did a lot of systems programming during BSc years. Advocate of open-source software. Hobbies include running, reading, swimming, and fixing bikes.