AdaFuse: Accelerating Dynamic Adapter Inference via Token-Level Pre-Gating and Fused Kernel Optimization

📅 2026-03-12
🤖 AI Summary
Dynamic adapters, such as those combining LoRA with Mixture-of-Experts (MoE), enhance the capabilities of large language models, but they incur severe inference latency from frequent fine-grained CUDA kernel invocations, slowing decoding by more than 2.5×. This work introduces a token-level pre-gating mechanism guided by the principle of "decide once, apply everywhere," coupled with a custom fused CUDA kernel and a LoRA parameter-merging technique. Together these effectively staticize the execution path of dynamic adapters, enabling end-to-end single-kernel inference for the first time. Evaluated on mainstream open-source large language models, the method matches state-of-the-art accuracy while reducing inference latency by over 2.4×, substantially narrowing the gap between model expressiveness and computational efficiency.
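The "decide once, apply everywhere" idea can be sketched as follows. This is an illustrative NumPy toy, not the paper's implementation: the gating matrix, top-k selection, and layer count are all assumptions. The key property is that the router runs once per token, and every adapter-augmented layer reuses that single decision, so the per-token execution path becomes static.

```python
import numpy as np

rng = np.random.default_rng(0)
d, num_adapters, num_layers, top_k = 16, 4, 3, 1

# One shared gating matrix (hypothetical; the paper's router may differ).
gate_W = rng.standard_normal((d, num_adapters))

def pre_gate(token_emb: np.ndarray, k: int = top_k) -> np.ndarray:
    """Make a single, global routing decision before the token is processed."""
    scores = token_emb @ gate_W
    return np.argsort(scores)[-k:]  # indices of the top-k adapters

token = rng.standard_normal(d)
chosen = pre_gate(token)

# Every layer reuses the same decision: no per-layer router call,
# no extra fine-grained kernel launch per block.
for layer in range(num_layers):
    active_adapters = chosen
```

Contrast this with conventional layer-wise routing, where each of the `num_layers` blocks would invoke its own gate (and its own small kernels) per token.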

📝 Abstract
The integration of dynamic, sparse structures like Mixture-of-Experts (MoE) with parameter-efficient adapters (e.g., LoRA) is a powerful technique for enhancing Large Language Models (LLMs). However, this architectural enhancement comes at a steep cost: despite minimal increases in computational load, the inference latency often skyrockets, leading to decoding speeds slowing by over 2.5 times. Through a fine-grained performance analysis, we pinpoint the primary bottleneck not in the computation itself, but in the severe overhead from fragmented, sequential CUDA kernel launches required for conventional dynamic routing. To address this challenge, we introduce AdaFuse, a framework built on a tight co-design between the algorithm and the underlying hardware system to enable efficient dynamic adapter execution. Departing from conventional layer-wise or block-wise routing, AdaFuse employs a token-level pre-gating strategy, which makes a single, global routing decision for all adapter layers before a token is processed. This "decide-once, apply-everywhere" approach effectively staticizes the execution path for each token, creating an opportunity for holistic optimization. We capitalize on this by developing a custom CUDA kernel that performs a fused switching operation, merging the parameters of all selected LoRA adapters into the backbone model in a single, efficient pass. Experimental results on popular open-source LLMs show that AdaFuse achieves accuracy on par with state-of-the-art dynamic adapters while drastically cutting decoding latency by a factor of over 2.4x, thereby bridging the gap between model capability and inference efficiency.
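The fused switching operation described above can be illustrated with a small NumPy sketch. This is not the paper's CUDA kernel; shapes, scaling (`alpha / r`), and the single-adapter selection are assumptions for illustration. The point it demonstrates: once pre-gating fixes the active adapters for a token, their low-rank deltas can be folded into the backbone weight in one pass, so decoding needs a single dense matmul instead of a base matmul plus one small matmul per adapter.

```python
import numpy as np

rng = np.random.default_rng(1)
d_in, d_out, r, num_adapters = 32, 32, 4, 4
alpha = 8.0

W = rng.standard_normal((d_out, d_in))            # backbone weight
A = rng.standard_normal((num_adapters, r, d_in))  # LoRA down-projections
B = rng.standard_normal((num_adapters, d_out, r)) # LoRA up-projections

def fused_merge(selected: list) -> np.ndarray:
    """Fold the selected adapters' low-rank deltas into the backbone weight once."""
    W_merged = W.copy()
    for i in selected:
        W_merged += (alpha / r) * (B[i] @ A[i])
    return W_merged

x = rng.standard_normal(d_in)
selected = [2]  # the adapters chosen by the token-level pre-gate

# Unfused reference path: base matmul plus a separate LoRA matmul per adapter.
y_unfused = W @ x + sum((alpha / r) * (B[i] @ (A[i] @ x)) for i in selected)

# Fused path: a single matmul against the pre-merged weight.
y_fused = fused_merge(selected) @ x
assert np.allclose(y_fused, y_unfused)
```

The fused CUDA kernel in AdaFuse performs the analogous merge on-device in a single pass, which is what eliminates the fragmented, sequential kernel launches identified as the bottleneck.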
Problem

Research questions and friction points this paper is trying to address.

dynamic adapter
inference latency
CUDA kernel overhead
Mixture-of-Experts
parameter-efficient fine-tuning
Innovation

Methods, ideas, or system contributions that make the work stand out.

dynamic adapter
token-level pre-gating
fused kernel
CUDA optimization
parameter-efficient fine-tuning