Publications & Awards
Published 'K/DA: Automated Data Generation Pipeline for Detoxifying Implicitly Offensive Language in Korean' in the Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics, 2025. Submitted a patent application for 'Method for Detoxifying Implicitly Offensive Language in Korean using LLMs' (under review). Won several competitions, including 2nd place in the AIKU 2024 Summer Project Competition and 1st place in the AIKU 2024 Winter Project Competition.
Research Experience
Participated in multiple projects, including Don’t Be Shy: Relating LM Metrics to Speaker Confidence; DPO-based Korean Language Detoxification with Context-Appropriate Emojis; and Outpainting with Edge Generation.
Education
B.S. in Computer Science & Engineering and B.S. in Biology from Korea University (Mar. 2021 - Feb. 2026 expected). Exchange student in Computer Science at the University of Texas at Austin (Aug. 2024 - Mar. 2025). Recipient of KIAT-IIE STEM Exchange Student Grant ($1,800).
Background
An undergraduate at Korea University, double majoring in Computer Science and Engineering and Biology. Research interests include: (1) learning interpretable representations without supervision to elucidate the structure of black-box latent spaces; (2) aligning modalities across different domains; and (3) addressing ethical issues related to hate speech through multimodal approaches. Outside of academics, she is a birder who loves discovering feathered gems from around the world.