nncase: An End-to-End Compiler for Efficient LLM Deployment on Heterogeneous Storage Architectures

📅 2025-12-25
🤖 AI Summary
To address the inefficiency and high adaptation cost of deploying large language models (LLMs) on heterogeneous memory architectures with fragmented traditional compiler workflows, this paper proposes the first end-to-end heterogeneous compilation framework tailored for LLMs. The framework introduces: (1) an e-graph-driven global policy search that jointly optimizes computation, data movement, and memory layout to resolve phase-ordering challenges; (2) three automated modules, Auto Vectorize, Auto Distribution, and Auto Schedule, that enable cross-hardware collaborative optimization; and (3) integrated cost-aware parallel policy search, on-chip cache locality scheduling, and buffer-aware code generation. Evaluated on the Qwen3 series, the framework outperforms MLC LLM and Intel IPEX, and achieves CPU performance comparable to the hand-optimized llama.cpp. This demonstrates the feasibility of fully automated compilation for high-performance LLM deployment on heterogeneous memory systems.

📝 Abstract
The efficient deployment of large language models (LLMs) is hindered by memory architecture heterogeneity, where traditional compilers suffer from fragmented workflows and high adaptation costs. We present nncase, an open-source, end-to-end compilation framework designed to unify optimization across diverse targets. Central to nncase is an e-graph-based term rewriting engine that mitigates the phase ordering problem, enabling global exploration of computation and data movement strategies. The framework integrates three key modules: Auto Vectorize for adapting to heterogeneous computing units, Auto Distribution for searching parallel strategies with cost-aware communication optimization, and Auto Schedule for maximizing on-chip cache locality. Furthermore, a buffer-aware Codegen phase ensures efficient kernel instantiation. Evaluations show that nncase outperforms mainstream frameworks like MLC LLM and Intel IPEX on Qwen3 series models and achieves performance comparable to the hand-optimized llama.cpp on CPUs, demonstrating the viability of automated compilation for high-performance LLM deployment. The source code is available at https://github.com/kendryte/nncase.
Problem

Research questions and friction points this paper is trying to address.

Unifies optimization across diverse memory architectures for LLMs
Mitigates phase ordering problem via e-graph-based term rewriting
Automates parallel strategies and cache locality for efficient deployment
Innovation

Methods, ideas, or system contributions that make the work stand out.

End-to-end compilation framework for heterogeneous targets
E-graph-based term rewriting engine for global optimization
Integrated modules for vectorization, distribution, and scheduling
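The e-graph-driven global search is the paper's central idea for mitigating phase ordering: rather than committing to a fixed pass pipeline, the compiler explores many rewrite orderings and extracts the best result. As a loose intuition only (this is not nncase's engine, which uses equality saturation over shared e-classes rather than the naive enumeration below), the following toy Python sketch applies a small set of hypothetical rewrite rules in every possible order over a term and extracts the cheapest form:

```python
# Toy illustration of order-independent rewrite search (the intuition behind
# e-graph optimization). A fixed pass order might apply "x*2 -> x<<1" before
# "x+0 -> x" and miss the cheapest form; exploring the closure of all
# rewrites cannot. Rules and costs here are illustrative, not nncase's.

# Expressions are nested tuples, e.g. ("mul", ("add", "x", 0), 2).
RULES = [
    # x + 0  ->  x
    lambda e: e[1] if isinstance(e, tuple) and e[0] == "add" and e[2] == 0 else None,
    # x * 2  ->  x << 1
    lambda e: ("shl", e[1], 1) if isinstance(e, tuple) and e[0] == "mul" and e[2] == 2 else None,
]

def rewrites(expr):
    """Yield every expression reachable by one rewrite anywhere in the term."""
    for rule in RULES:
        out = rule(expr)
        if out is not None:
            yield out
    if isinstance(expr, tuple):
        op, *args = expr
        for i, a in enumerate(args):
            for ra in rewrites(a):
                yield (op, *args[:i], ra, *args[i + 1:])

def cost(expr):
    """Illustrative per-operator cost model (leaves are free)."""
    if not isinstance(expr, tuple):
        return 0
    weight = {"mul": 4, "add": 1, "shl": 1}[expr[0]]
    return weight + sum(cost(a) for a in expr[1:])

def saturate(expr):
    """Explore the closure of all rewrite orderings, extract the cheapest form."""
    seen = {expr}
    frontier = [expr]
    while frontier:
        e = frontier.pop()
        for r in rewrites(e):
            if r not in seen:
                seen.add(r)
                frontier.append(r)
    return min(seen, key=cost)

print(saturate(("mul", ("add", "x", 0), 2)))  # -> ('shl', 'x', 1)
```

A real e-graph avoids this exponential enumeration by storing all equivalent terms in shared equivalence classes, which is what lets nncase search computation, data movement, and layout choices jointly rather than pass by pass.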
Hui Guo
Canaan Inc.
Qihang Zheng
Canaan Inc.
Chenghai Huo
Canaan Inc.
Dongliang Guo
University of Virginia
Haoqi Yang
Canaan Inc.
Yang Zhang
Canaan Inc.