Younghun Lee
Google Scholar ID: xoTdfCoAAAAJ
Purdue University
Natural Language Processing
Citations & Impact (all-time)
  • Citations: 144
  • H-index: 5
  • i10-index: 1
  • Publications: 8
  • Co-authors: 1
Resume (English only)
Academic Achievements
  • SOLAR: Towards Characterizing Subjectivity of Individuals through Modeling Value Conflicts and Trade-offs, Main Conference at EMNLP 2025
  • Towards Explaining Subjective Ground of Individuals on Social Media, Findings of EMNLP 2022
  • Towards Understanding Counseling Conversations: Domain Knowledge and Large Language Models, Findings of EACL 2023
  • Comparative Studies of Detecting Abusive Language on Twitter, EMNLP 2018 Workshop (ALW2)
  • Learning to Reduce: Towards Improving Performance of Large Language Models on Structured Data, ICML 2024 Workshop on Long-Context Foundation Models
  • Weighted Contrastive Learning With False Negative Control to Help Long-tailed Product Classification, ACL 2023 Industry Track
  • CamemBERT and BiT Feature Extraction for Multimodal Product Classification and Retrieval, SIGIR 2020 eCom Workshop
Research Experience
  • Research Intern at LG AI Research (Summer 2024)
  • Research Scientist/Engineer Intern at Adobe (Summer 2023)
  • Research Science Intern at Rakuten Institute of Technology (Summer 2020)
  • Teaching Assistant at Purdue University for CS57100 - Artificial Intelligence, CS57300 - Data Mining, CS24200 - Introduction To Data Science, and CS18200 - Foundations of Computer Science
Education
  • Ph.D. student in Computer Science at Purdue University, advised by Professor Dan Goldwasser from the Purdue NLP Group.
Background
  • Research interests: Natural Language Processing, Computational Social Science, Representation Learning, and Explainable AI. He is particularly interested in discourse analysis across domains such as hate speech, subjective preference, and counseling conversations, and in systematic approaches to representing discourse more effectively and explainably, both to humans and to large language models.