Bairu Hou
Google Scholar ID: wBvDD88AAAAJ
University of California, Santa Barbara
Citations & Impact (all-time)
  • Citations: 754
  • H-index: 11
  • i10-index: 12
  • Publications: 16
  • Co-authors: 0
Academic Achievements
  • ICML'25: Instruction-Following Pruning for Large Language Models
  • NeurIPS'25: KVLink: Accelerating Large Language Models via Efficient KV Cache Reuse
  • arXiv: ThinkPrune: Pruning Long Chain-of-Thought of LLMs via Reinforcement Learning
  • NAACL'25: A Probabilistic Framework for LLM Hallucination Detection via Belief Tree Propagation
  • arXiv: Defending Large Language Models against Jailbreak Attacks via Semantic Smoothing
  • ICML'24: Decomposing Uncertainty for Large Language Models through Input Clarification Ensembling
  • NAACL'24: Advancing the Robustness of Large Language Models through Self-Denoised Smoothing
  • TMLR: Improving Diffusion Models for Scene Text Editing with Dual Encoders
  • ICML'23: PromptBoosting: Black-Box Text Classification with Ten Forward Passes
  • ICLR'23: TextGrad: Advancing Robustness Evaluation in NLP by Gradient-Driven Optimization
  • ACL'21: OpenAttack: An Open-source Textual Adversarial Attack Toolkit
Research Experience
  • Working on a broad range of LLM topics at UC Santa Barbara, including model pruning, efficiency improvements, and adversarial robustness.
Education
  • Ph.D. — University of California, Santa Barbara, Department of Computer Science; Advisor: Prof. Shiyu Chang
  • Research Assistant — THUNLP Group, supervised by Prof. Zhiyuan Liu, 2018–2021
Background
  • Ph.D. student in the Department of Computer Science at UC Santa Barbara, currently advised by Prof. Shiyu Chang. His research interests include model pruning, improving the efficiency of LLMs, and enhancing adversarial robustness.