🤖 AI Summary
LLM kernel optimization heavily relies on hardware experts' domain knowledge; existing LLM-based approaches, lacking such expertise, struggle to balance exploration and exploitation effectively within the vast optimization search space.
Method: We propose the first hardware-aware hierarchical Multi-Armed Bandit (MAB) framework, modeling kernel optimization as a hierarchical MAB problem. It integrates hardware profiling, runtime behavior clustering, and LLM-driven code generation, augmented by a reinforcement learning-based decision mechanism that dynamically schedules the exploration-exploitation trade-off.
Contribution: Our method significantly outperforms state-of-the-art approaches on TritonBench, delivering substantial improvements in tokens-per-second performance. Optimization efficiency scales continuously with available resources, without saturation, demonstrating strong scalability. To our knowledge, this is the first work enabling LLMs to achieve simultaneous automation, scalability, and high performance in hardware-level kernel optimization.
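The hierarchical MAB formulation above can be illustrated with a minimal sketch: an outer bandit picks which kernel candidate to spend optimization budget on, and a per-kernel inner bandit picks which optimization strategy to apply. All kernel names, strategy names, reward values, and the choice of UCB1 here are illustrative assumptions, not the paper's actual algorithm; in the real system the reward would be measured kernel performance, not a simulated draw.

```python
import math
import random

class UCB1:
    """Standard UCB1 bandit over a fixed set of arms."""
    def __init__(self, arms):
        self.arms = list(arms)
        self.counts = {a: 0 for a in self.arms}
        self.values = {a: 0.0 for a in self.arms}  # running mean reward

    def select(self):
        for a in self.arms:          # play each arm once before using UCB
            if self.counts[a] == 0:
                return a
        total = sum(self.counts.values())
        # UCB1 score: empirical mean + exploration bonus
        return max(self.arms, key=lambda a: self.values[a]
                   + math.sqrt(2 * math.log(total) / self.counts[a]))

    def update(self, arm, reward):
        self.counts[arm] += 1
        self.values[arm] += (reward - self.values[arm]) / self.counts[arm]

# Hierarchy: outer bandit over kernel candidates, inner bandit per kernel
# over optimization strategies (names are hypothetical).
kernels = ["matmul", "softmax", "layernorm"]
strategies = ["tile", "vectorize", "pipeline"]
outer = UCB1(kernels)
inner = {k: UCB1(strategies) for k in kernels}

def measure_speedup(kernel, strategy):
    # Stand-in for compiling + profiling the rewritten kernel; a toy
    # stochastic reward so the sketch runs without any hardware.
    base = {"matmul": 0.6, "softmax": 0.4, "layernorm": 0.3}[kernel]
    bonus = {"tile": 0.2, "vectorize": 0.1, "pipeline": 0.0}[strategy]
    return max(0.0, random.gauss(base + bonus, 0.05))

random.seed(0)
for _ in range(200):
    k = outer.select()        # which kernel to spend budget on
    s = inner[k].select()     # which strategy to try on that kernel
    r = measure_speedup(k, s)
    inner[k].update(s, r)
    outer.update(k, r)

best_kernel = max(kernels, key=lambda k: outer.values[k])
```

Under this setup the bandit concentrates its budget on the kernel/strategy pairs with the highest observed speedup while still occasionally revisiting the others, which is exactly the exploration-exploitation trade-off the framework schedules.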
📄 Abstract
High-quality kernels are critical for reducing training and inference costs of Large Language Models (LLMs), yet they traditionally require significant expertise in hardware architecture and software optimization. While recent advances in LLM-based code generation show promise for complex optimization, existing methods struggle with the vast optimization space due to insufficient hardware domain knowledge, failing to effectively balance exploration and exploitation. We present KernelBand, a novel framework that formulates kernel optimization as a hierarchical multi-armed bandit problem, enabling LLM agents to strategically navigate the optimization space by treating kernel selection and optimization strategy application as sequential decision-making processes. Our approach leverages hardware profiling information to identify promising optimization strategies and employs runtime behavior clustering to reduce exploration overhead across kernel candidates. Extensive experiments on TritonBench demonstrate that KernelBand significantly outperforms state-of-the-art methods, achieving superior performance with fewer tokens while exhibiting consistent improvement without saturation as computational resources increase.
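The runtime behavior clustering mentioned above can be sketched as grouping kernel candidates by their profiling features, so that bandit statistics learned on one kernel transfer to behaviorally similar kernels instead of being explored from scratch. The feature values, kernel names, and the plain k-means below are illustrative assumptions, not the paper's actual clustering pipeline:

```python
# Toy profiling features per kernel candidate:
# (memory-bandwidth utilization, compute occupancy). Values are illustrative.
profiles = {
    "matmul_v1":  (0.30, 0.90),
    "matmul_v2":  (0.35, 0.85),
    "softmax_v1": (0.90, 0.20),
    "softmax_v2": (0.85, 0.25),
}

def kmeans(points, k, iters=20):
    """Minimal 2-D k-means; returns a cluster label per point name."""
    names = list(points)
    centers = [points[names[i]] for i in range(k)]  # seed with first k points
    labels = {}
    for _ in range(iters):
        # Assignment step: nearest center by squared Euclidean distance
        for n in names:
            x, y = points[n]
            labels[n] = min(range(k),
                            key=lambda c: (x - centers[c][0]) ** 2
                                        + (y - centers[c][1]) ** 2)
        # Update step: move each center to the mean of its members
        for c in range(k):
            members = [points[n] for n in names if labels[n] == c]
            if members:
                centers[c] = (sum(p[0] for p in members) / len(members),
                              sum(p[1] for p in members) / len(members))
    return labels

labels = kmeans(profiles, k=2)
# Kernels in the same cluster can share bandit statistics, so a strategy
# that worked on matmul_v1 is tried early on matmul_v2.
```

The design intent is overhead reduction: with N kernel candidates and M strategies, sharing statistics within clusters shrinks the effective exploration space from N×M arms toward (number of clusters)×M.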