Yeongmin Kim

Google Scholar ID: SBF13JUAAAAJ
KAIST
Generative Models, Machine Learning
Citations & Impact
All-time
  • Citations: 180
  • H-index: 5
  • i10-index: 3
  • Publications: 13
  • Co-authors: 23
Resume (English only)
Academic Achievements
  • Publications: [P1] Distillation of Large Language Models via Concrete Score Matching; [C11] Preference Optimization by Estimating the Ratio of the Data Distribution; [C10] Autoregressive Distillation of Diffusion Transformers; [C8] Diffusion Bridge AutoEncoders for Unsupervised Representation Learning; [C5] Training Unbiased Diffusion Models From Biased Dataset; [C3] Refining Generative Process with Discriminator Guidance in Score-based Diffusion Models.
  • Awards: [C10] CVPR 2025 Oral (Top 0.7%); [C8] ICLR 2025 Spotlight (Top 5.1%); [C3] ICML 2023 Oral (Top 2.3%).
  • Academic Services: Reviewer for ICML 2024, 2025; NeurIPS 2024, 2025; ICLR 2025; CVPR 2025; AISTATS 2025. Workshop Reviewer for FPI@ICLR 2025; SPIGM@ICML 2023, 2024 and SPIGM@NeurIPS 2025; NCW@ICML 2023.
Research Experience
  • Shinhan AI (Mar. 2020 ~ Aug. 2020): Financial AI.
  • Meta GenAI (June 2024 ~ Nov. 2024): Diffusion models, efficient inference; Manager: Artsiom Sanakoyeu.
Education
  • KAIST, Daejeon, Korea, B.S. (Mar. 2017 - Feb. 2022): Department of Industrial & Systems Engineering (ISysE) and Computer Science (CS) (Double Major); Dean's List (2019 Fall & 2020 Fall).
  • KAIST, Daejeon, Korea, Integrated M.S. & Ph.D. (Mar. 2022 - Feb. 2027, expected): Graduate School of Data Science (GSDS); transferred from the M.S. program in 2023.
Background
  • Research Interests: Generative AI grounded in probabilistic modeling.
  • Professional Field: Improving the robustness, efficiency, and steerability of diffusion models.
  • Brief Introduction: Currently exploring how probabilistic modeling can enhance the training and inference of large language models.
Miscellany
  • Seeking a research internship in Generative AI for the first half of 2026.
  • Contact: alsdudrla10@kaist.ac.kr
  • Homepages: Google Scholar, GitHub.