🤖 AI Summary
Manual cross-version migration of out-of-tree Linux kernel patches is labor-intensive and costly. Method: We propose the first LLM-driven automated migration framework for real-world scenarios, featuring (i) a semantics-preserving code fingerprint encoding scheme; (ii) a tri-module collaborative architecture that integrates context-aware retrieval, precise migration-point localization, and patch rewriting; and (iii) KernelPatchBench, the first real-world benchmark for out-of-tree patch migration. Results: On authentic out-of-tree patches, the framework achieves a 72.59% average migration success rate, outperforming baseline LLMs by 50.74%. It substantially improves robustness and accuracy under incomplete context and complex kernel evolution, demonstrating practical viability for industrial-scale kernel maintenance.
📝 Abstract
Out-of-tree kernel patches are essential for adapting the Linux kernel to new hardware or enabling specific functionalities. Maintaining and updating these patches across different kernel versions demands significant effort from experienced engineers. Large language models (LLMs) have shown remarkable progress across various domains, suggesting their potential for automating out-of-tree kernel patch migration. However, our findings reveal that LLMs, while promising, struggle to understand incomplete code context and to accurately identify migration points. In this work, we propose MigGPT, a framework that employs a novel code fingerprint structure to retain code snippet information and incorporates three meticulously designed modules to improve the migration accuracy and efficiency of out-of-tree kernel patches. Furthermore, we establish a robust benchmark using real-world out-of-tree kernel patch projects to evaluate LLM capabilities. Evaluations show that MigGPT significantly outperforms the direct application of vanilla LLMs, achieving an average completion rate of 72.59% (a 50.74% improvement) on migration tasks.
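The abstract does not specify what the code fingerprint structure contains. As a purely hypothetical sketch (none of the field names or normalization choices below come from the paper), one plausible encoding that retains snippet-level information for retrieval and migration-point re-localization might look like:

```python
# Hypothetical illustration only: the paper's actual "code fingerprint"
# structure is not described in this abstract. This sketch shows one
# plausible way to retain code-snippet information so a snippet can be
# re-located across kernel versions despite formatting churn.
import hashlib
import re
from dataclasses import dataclass


@dataclass(frozen=True)
class CodeFingerprint:
    file_path: str        # where the snippet lives in the kernel tree
    symbol: str           # enclosing function or struct name
    body_hash: str        # hash of the whitespace-normalized snippet body
    context: tuple = ()   # nearby lines kept for fuzzy re-localization


def _normalize(code: str) -> str:
    # Collapse all whitespace so formatting-only changes between
    # kernel versions do not alter the fingerprint.
    return re.sub(r"\s+", " ", code).strip()


def make_fingerprint(file_path: str, symbol: str, snippet: str,
                     context: tuple = ()) -> CodeFingerprint:
    digest = hashlib.sha256(_normalize(snippet).encode()).hexdigest()[:16]
    return CodeFingerprint(file_path, symbol, digest, tuple(context))
```

Under this sketch, two versions of the same snippet that differ only in indentation or line breaks map to the same fingerprint, which is the kind of semantics-preserving stability a migration framework would need when matching patch hunks against an evolved kernel source tree.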