Published papers including 'When Benchmarks Age: Temporal Misalignment through Large Language Model Factuality Evaluation' and 'BiasFreeBench: a Benchmark for Mitigating Bias in Large Language Model Responses', and participated in projects such as 'Improving In-Context Learning with Reasoning Distillation'.
Research Experience
Research Scientist Intern at Adobe Research, Jun. 2025 -- Sep. 2025, San Jose, CA, US; Research Intern at Georgia Institute of Technology, NLP X Lab, Jul. 2023 -- Feb. 2024, Remote.
Education
Ph.D. in Computer Science and Engineering at UCSD since Sep. 2024, advised by Prof. Julian McAuley; M.E. in Computer Technology from Zhejiang University, Sep. 2021 -- Jun. 2024, advised by Prof. Ningyu Zhang; B.E. in Computer Science and Technology from Lanzhou University, Sep. 2017 -- Jun. 2021.
Background
Research interests include updating and controlling LLM behaviors; trustworthy NLP (LLM reasoning, interpretability and factuality, fairness and toxicity); and music science (interpretation, evaluation, and generation).
Miscellany
Interned at Microsoft Research Asia, collaborating with Xu Tan.