Published papers include 'PLAY2PROMPT: Zero-shot Tool Instruction Optimization for LLM Agents via Tool Play', among others, covering topics such as tool instruction optimization, passage re-ranking, query re-ranking for open-domain question answering, and interpretable unified language checking.
Research Experience
Research Intern, MIT-IBM Watson AI Lab, Jun 2024 – Aug 2024: focused on tool instruction optimization to improve LLMs' tool-use abilities.
PhD Student, MIT CSAIL, Sep 2021 – Present: worked on a variety of NLP and speech projects.
Research Intern, Apple Inc. (two internships), Jul 2018 – Aug 2018 and Jun 2017 – Sep 2017: focused on generative modeling and input representations for language modeling.
Undergraduate Student, National Taiwan University (NTU), Sep 2013 – Jan 2018: majored in Electrical Engineering; also served as a Teaching Assistant and participated in research at multiple labs.
Education
Ph.D., EECS, Massachusetts Institute of Technology, 2025 (expected); S.M., EECS, Massachusetts Institute of Technology, 2021; B.S.E., Electrical Engineering, National Taiwan University, 2018.
Advisors: Dr. James Glass (Ph.D. and Master's at MIT); Prof. Hung-Yi Lee and Prof. Lin-Shan Lee (speech and language understanding at NTU); Prof. Yu-Chiang Frank Wang (computer vision at NTU).
Background
Research interests: Artificial Intelligence, Natural Language Processing, Large Language Models, and Machine Learning. Current research explores augmented LLMs, including improving LLM agents' retrieval, planning, and tool-use capabilities.