KernelBand: Boosting LLM-based Kernel Optimization with a Hierarchical and Hardware-aware Multi-armed Bandit

📅 2025-11-24
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
LLM kernel optimization heavily relies on hardware experts’ domain knowledge; existing LLM-based approaches, lacking such expertise, struggle to balance exploration and exploitation effectively within the vast optimization search space. Method: We propose the first hardware-aware hierarchical Multi-Armed Bandit (MAB) framework, modeling kernel optimization as a hierarchical MAB problem. It integrates hardware profiling, runtime behavior clustering, and LLM-driven code generation, augmented by a reinforcement learning–based decision mechanism that dynamically schedules the exploration–exploitation trade-off. Contribution: Our method significantly outperforms state-of-the-art approaches on TritonBench, delivering substantial improvements in tokens-per-second performance. Optimization efficiency scales continuously with available resources—without saturation—demonstrating strong scalability. To our knowledge, this is the first work enabling LLMs to achieve simultaneous automation, scalability, and high performance in hardware-level kernel optimization.
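The runtime-behavior clustering mentioned above can be illustrated with a minimal sketch (not the paper's implementation; the function name, the greedy centroid scheme, and the choice of profiling features are assumptions): kernels with similar normalized profiling vectors are grouped so that, in KernelBand's spirit, bandit statistics could be shared within a cluster to cut exploration overhead.

```python
import math

def cluster_by_runtime(profiles, threshold=0.2):
    """Greedy clustering of kernels by normalized profiling vectors.

    profiles: {kernel_name: (feature, ...)}, e.g. (dram_util, occupancy),
    each feature scaled to [0, 1]. A kernel joins the first cluster whose
    centroid lies within `threshold` (Euclidean distance); otherwise it
    seeds a new cluster.
    """
    clusters = []  # each: {"centroid": tuple, "members": [names]}
    for name, vec in profiles.items():
        for c in clusters:
            if math.dist(vec, c["centroid"]) < threshold:
                c["members"].append(name)
                # Update the centroid as a running mean over members.
                n = len(c["members"])
                c["centroid"] = tuple(
                    m + (v - m) / n for m, v in zip(c["centroid"], vec))
                break
        else:
            clusters.append({"centroid": vec, "members": [name]})
    return clusters
```

Two matmul variants with near-identical memory-bandwidth and occupancy profiles would land in one cluster, while a softmax kernel with a different profile seeds its own.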

📝 Abstract
High-quality kernels are critical for reducing training and inference costs of Large Language Models (LLMs), yet they traditionally require significant expertise in hardware architecture and software optimization. While recent advances in LLM-based code generation show promise for complex optimization, existing methods struggle with the vast optimization space due to insufficient hardware domain knowledge, failing to effectively balance exploration and exploitation. We present KernelBand, a novel framework that formulates kernel optimization as a hierarchical multi-armed bandit problem, enabling LLM agents to strategically navigate the optimization space by treating kernel selection and optimization strategy application as sequential decision-making processes. Our approach leverages hardware profiling information to identify promising optimization strategies and employs runtime behavior clustering to reduce exploration overhead across kernel candidates. Extensive experiments on TritonBench demonstrate that KernelBand significantly outperforms state-of-the-art methods, achieving superior performance with fewer tokens while exhibiting consistent improvement without saturation as computational resources increase.
Problem

Research questions and friction points this paper is trying to address.

Efficiently optimizing kernels to reduce LLM training and inference costs
Navigating the vast kernel optimization search space with a hierarchical multi-armed bandit
Leveraging hardware awareness to balance exploration and exploitation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Hierarchical multi-armed bandit for kernel optimization
Hardware profiling to identify optimization strategies
Runtime behavior clustering reduces exploration overhead
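The hierarchical bandit idea can be sketched as two nested UCB1 bandits: an outer bandit picks which kernel to work on, and an inner bandit picks which optimization strategy to apply to it, with the measured speedup as the reward. This is a minimal sketch under assumed structure, not the paper's algorithm; `optimize`, `measure_speedup`, and the plain UCB1 rule (the paper additionally uses an RL-based scheduling mechanism) are illustrative choices.

```python
import math

class UCB1:
    """UCB1 bandit over a fixed set of arms."""
    def __init__(self, arms):
        self.arms = list(arms)
        self.counts = {a: 0 for a in self.arms}
        self.values = {a: 0.0 for a in self.arms}  # running mean reward
        self.total = 0

    def select(self):
        # Play each arm once before applying the UCB rule.
        for a in self.arms:
            if self.counts[a] == 0:
                return a
        # UCB1 index: mean reward + sqrt(2 ln N / n_a).
        return max(self.arms, key=lambda a: self.values[a]
                   + math.sqrt(2 * math.log(self.total) / self.counts[a]))

    def update(self, arm, reward):
        self.total += 1
        self.counts[arm] += 1
        self.values[arm] += (reward - self.values[arm]) / self.counts[arm]

def optimize(kernels, strategies, measure_speedup, rounds=200):
    """Hierarchical selection: pick a kernel, then a strategy for it."""
    outer = UCB1(kernels)
    inner = {k: UCB1(strategies) for k in kernels}
    best = (None, None, 0.0)
    for _ in range(rounds):
        k = outer.select()
        s = inner[k].select()
        r = measure_speedup(k, s)  # e.g. speedup of an LLM-generated variant
        inner[k].update(s, r)
        outer.update(k, r)
        if r > best[2]:
            best = (k, s, r)
    return best
```

In practice `measure_speedup` would compile and benchmark an LLM-generated kernel variant; the two-level structure lets exploration budget concentrate on kernels and strategies that keep paying off.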
Dezhi Ran
School of Computer Science, Peking University
Short Video Streaming · Software Testing · Program Analysis
Shuxiao Xie
East China Normal University, Shanghai, China
Mingfang Ji
Department of Computer Science, Tianjin University, Tianjin, China
Ziyue Hua
Key Lab of HCST (PKU), MOE; SCS, Peking University, Beijing, China
Mengzhou Wu
Peking University
Software Engineering · Large Language Model
Yuan Cao
Key Lab of HCST (PKU), MOE; SCS, Peking University, Beijing, China
Yuzhe Guo
School of Computer Science & Technology, Beijing Jiaotong University, Beijing, China
Yu Hao
Hong Kong University of Science and Technology, Hong Kong, China
Linyi Li
School of Computing Science, Simon Fraser University, Burnaby, BC, Canada
Yitao Hu
Professor, Tianjin University
LLM System · DNN System · AI for Science
Tao Xie
Key Lab of HCST (PKU), MOE; SCS, Peking University, Beijing, China