- Publications: 'OpenVLThinker: An Early Exploration to Vision-Language Reasoning via Iterative Self-Improvement' (NeurIPS 2025), among others.
- Project contributions: internal contributions to Google LLMs through 'Ultra-fast Exploration for Scalable Agentic Reasoner'.
- Delivered invited talks at several venues, e.g., a research talk on synthetic data for LLM post-training at Anuttacon.
Research Experience
- 2025: Member of Technical Staff at xAI, working on multi-modal large language models.
- 2025: Student Researcher at Google DeepMind, focusing on synthetic data for LLM post-training.
- 2025: Student Researcher at Google LLC, researching reinforcement learning for LLM agentic reasoning.
- 2024: Research Intern at Microsoft Research, focusing on LLM self-training for math reasoning.
- 2023: Applied Scientist Intern at Amazon AWS, working on large language model reasoning with knowledge graphs.
Education
- Ph.D. in Computer Science, UCLA (advisor: Prof. Wei Wang).
- M.S. in Computer Science, UCLA.
- B.S. in Mathematics, UCLA, during which he was a student researcher in the UCLA-NLP group (advisor: Prof. Kai-Wei Chang).
Background
Research Interests: Multi-modal large language models. Professional Field: Computer Science. Background: Pursued a Ph.D. at the Department of Computer Science, UCLA, advised by Prof. Wei Wang.
Miscellany
Personal interests: Maintains most of his study notes on LLMs.