🤖 AI Summary
Existing benchmarks for evaluating large language model–based Linux kernel crash repair are static and constrained by the models’ knowledge cutoff dates, failing to capture the kernel’s continuous evolution. To address this limitation, this work proposes Live-kBench, the first time-sensitive, attribute-aware dynamic evaluation framework. Integrated with the standardized execution environment kEnv, Live-kBench enables agent-agnostic, fair assessment by decoupling agent logic from heavyweight execution infrastructure. The framework employs an automated crawler to continuously ingest newly disclosed vulnerabilities and incorporates a feedback mechanism to establish an end-to-end patch evaluation pipeline. Experiments on 534 real-world vulnerabilities show that mainstream agents generate plausible patches on their first attempt in 74% of cases, yet only about 20% closely match developer-authored fixes; integrating the feedback mechanism improves repair accuracy by 29%.
📝 Abstract
Repairing system crashes discovered by kernel fuzzers like Syzkaller is a critical yet underexplored challenge in software engineering. While recent works have introduced Large Language Model (LLM) based agents for Linux kernel crash resolution, their evaluation benchmarks are usually static and thus do not capture the evolving nature of the Linux kernel; they also suffer from potential data contamination due to LLM knowledge cutoffs. To address this problem, we present (i) Live-kBench, an evaluation framework for self-evolving benchmarks that continuously scrapes freshly discovered kernel bugs and evaluates agents on them, and (ii) kEnv, an agent-agnostic, standardized crash-resolution environment for kernel compilation, execution, and feedback. This design decouples agent workflows from heavyweight execution, enabling fair and scalable comparison across diverse agent frameworks under identical conditions. We curate an inaugural dataset of 534 Linux kernel bugs and empirically demonstrate a significant performance gap: agents achieve up to a 25% higher equivalent-patch rate on bugs fixed before the LLM knowledge cutoff. Using kEnv, we benchmark three state-of-the-art agents, showing that they resolve 74% of crashes on the first attempt (plausible patches); however, only ~20% of generated patches closely match developer fixes. Additionally, exposing crash-resolution feedback improves the crash-resolution rate by 29%. Live-kBench provides the community with an evaluation infrastructure for self-evolving benchmarks that is both time and attribute sensitive, complete with a public dashboard to track agent progress on Linux kernel bugs.
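The abstract describes an end-to-end pipeline: a crawler ingests freshly disclosed bugs, agents propose patches, kEnv compiles and replays the reproducer, and failure feedback drives retries. A minimal sketch of that loop is shown below; all names (`crawl_new_bugs`, `build_and_test`, `run_agent`) are illustrative stand-ins, not the actual Live-kBench or kEnv API.

```python
# Hypothetical sketch of the Live-kBench evaluation loop described above.
# The crawler, kEnv, and agent are replaced with stand-in stubs; only the
# control flow (attempt -> feedback -> retry) mirrors the described design.
from dataclasses import dataclass

@dataclass
class Bug:
    bug_id: str
    reproducer: str  # e.g., a Syzkaller reproducer program

def crawl_new_bugs():
    """Stand-in for the crawler that continuously ingests new kernel bugs."""
    return [Bug("example-1", "repro.c")]

def run_agent(bug, feedback=None):
    """Stand-in for any agent; feedback from a failed attempt guides retries."""
    return "fix: placeholder patch" if feedback else "placeholder patch"

def build_and_test(bug, patch):
    """Stand-in for kEnv: apply the patch, compile the kernel, and replay
    the reproducer. Returns (crash_resolved, feedback_log)."""
    resolved = patch is not None and patch.startswith("fix:")
    return resolved, "" if resolved else "KASAN: use-after-free (simulated log)"

def evaluate(max_attempts=2):
    """Run each freshly crawled bug through the agent, feeding execution
    feedback back on failure, up to max_attempts times."""
    results = {}
    for bug in crawl_new_bugs():
        feedback = None
        resolved = False
        for _ in range(max_attempts):
            patch = run_agent(bug, feedback)
            resolved, feedback = build_and_test(bug, patch)
            if resolved:
                break
        results[bug.bug_id] = resolved
    return results
```

In this toy run the first attempt fails and the second, feedback-guided attempt succeeds, mirroring the reported 29% improvement from exposing crash-resolution feedback; with `max_attempts=1` the same bug goes unresolved.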