Published multiple papers, including 'SPG: Sandwiched Policy Gradient for Masked Diffusion Language Models', with works accepted at top conferences such as ICLR 2025 and NeurIPS 2025.
Research Experience
Interned at Meta FAIR and Genentech. Her research focuses on developing controllable and efficient generative models via reinforcement learning, multi-modal learning, and representation learning, with applications to language models, vision, and scientific data (e.g., biochemistry).
Education
Currently a fourth-year PhD candidate at MIT CSAIL, advised by Prof. Tommi Jaakkola. She obtained her Bachelor's degree from Tsinghua University, where she worked as a research assistant under the supervision of Mingsheng Long. She also worked as a research intern with Mengdi Wang at Princeton University and with Cyrus Shahabi at the University of Southern California.
Background
Her research interests lie broadly in deep generative models, reinforcement learning, multi-modal learning, and AI for science. Her PhD is supported by the Citadel GQS PhD Fellowship.
Miscellany
Links to Google Scholar, LinkedIn, and Twitter are provided on her personal website, along with a resume last updated in October 2025.