FALQON: Accelerating LoRA Fine-tuning with Low-Bit Floating-Point Arithmetic

📅 2025-10-28
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the limited acceleration of LoRA fine-tuning in FP8 low-bit training—caused by frequent quantization/dequantization overhead—this paper proposes FALQON, a tightly integrated framework that fuses LoRA adapters with FP8-quantized backbone models and restructures the end-to-end forward-backward computation flow. It introduces a novel row-wise proxy gradient update mechanism to eliminate per-layer recomputation of quantization parameters, substantially reducing quantization overhead. Additionally, it incorporates a quantization-aware propagation algorithm enabling fully FP8 training without mixed-precision fallbacks. Experiments demonstrate that FALQON achieves comparable accuracy to full-precision LoRA while accelerating training by approximately 3×. Crucially, it eliminates the need for post-training quantization, enhancing deployment efficiency and compatibility with FP8 inference pipelines.
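The core idea described above, merging the LoRA update B·A directly into a row-wise-quantized backbone so that no separate low-rank computational path (with its own quantize/dequantize steps) is needed, can be illustrated with a small NumPy sketch. This is an illustrative stand-in, not the paper's implementation: the absmax scaling and rounding below only simulate FP8 E4M3 casting, and all function names are assumptions.

```python
import numpy as np

FP8_MAX = 448.0  # largest magnitude representable in FP8 E4M3

def quantize_rowwise(w):
    """Per-row absmax scaling; plain rounding stands in for the FP8 cast."""
    scale = np.abs(w).max(axis=1, keepdims=True) / FP8_MAX
    scale = np.where(scale == 0, 1.0, scale)       # guard all-zero rows
    q = np.clip(np.round(w / scale), -FP8_MAX, FP8_MAX)
    return q, scale

def merge_lora(w_q, scale, a, b):
    """Fold the LoRA update B @ A into the quantized backbone in one pass,
    so the forward path needs no separate small-matrix LoRA matmuls."""
    w = w_q * scale + b @ a                        # dequantize once, add update
    return quantize_rowwise(w)                     # requantize merged weight

rng = np.random.default_rng(0)
W = rng.standard_normal((64, 64)).astype(np.float32)
A = 0.01 * rng.standard_normal((4, 64)).astype(np.float32)   # LoRA rank 4
B = 0.01 * rng.standard_normal((64, 4)).astype(np.float32)

Wq, s = quantize_rowwise(W)
Wq2, s2 = merge_lora(Wq, s, A, B)
err = np.abs(Wq2 * s2 - (W + B @ A)).max()         # merge-then-quantize error
```

The point of the sketch is structural: after the merge, a forward pass touches only one large FP8 matmul per layer, which is exactly the regime where FP8 hardware support pays off, instead of two extra small-dimensional matmuls whose quantization overhead the paper identifies as the bottleneck.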

📝 Abstract
Low-bit floating-point (FP) formats, such as FP8, provide significant acceleration and memory savings in model training thanks to native hardware support on modern GPUs and NPUs. However, we analyze that FP8 quantization offers speedup primarily for large-dimensional matrix multiplications, while inherent quantization overheads diminish speedup when applied to low-rank adaptation (LoRA), which uses small-dimensional matrices for efficient fine-tuning of large language models (LLMs). To address this limitation, we propose FALQON, a novel framework that eliminates the quantization overhead from separate LoRA computational paths by directly merging LoRA adapters into an FP8-quantized backbone during fine-tuning. Furthermore, we reformulate the forward and backward computations for merged adapters to significantly reduce quantization overhead, and introduce a row-wise proxy update mechanism that efficiently integrates substantial updates into the quantized backbone. Experimental evaluations demonstrate that FALQON achieves approximately a 3× training speedup over existing quantized LoRA methods with a similar level of accuracy, providing a practical solution for efficient large-scale model fine-tuning. Moreover, FALQON's end-to-end FP8 workflow removes the need for post-training quantization, facilitating efficient deployment. Code is available at https://github.com/iamkanghyunchoi/falqon.
Problem

Research questions and friction points this paper is trying to address.

FP8 quantization has limited speedup for LoRA fine-tuning due to overheads
FALQON merges LoRA adapters into FP8 backbone to eliminate quantization overhead
The method achieves 3x training speedup while maintaining similar accuracy
Innovation

Methods, ideas, or system contributions that make the work stand out.

Merges LoRA adapters into FP8-quantized backbone
Reformulates forward-backward computations to reduce overhead
Uses row-wise proxy update for quantized integration
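The row-wise proxy update listed above can be pictured with a hypothetical NumPy sketch. The paper's exact mechanism is not spelled out on this page, so everything here is an assumption: updates accumulate in a high-precision proxy buffer, and a backbone row is rewritten, recomputing the quantization scale for that row alone rather than the whole layer, only once its pending update becomes substantial.

```python
import numpy as np

FP8_MAX = 448.0  # largest magnitude representable in FP8 E4M3

def rowwise_proxy_update(w_q, scale, proxy, grad, lr=1e-2, rel_threshold=0.5):
    """Hypothetical sketch: gradients accumulate in a high-precision proxy;
    a row of the FP8 backbone is folded in (and only that row's scale
    recomputed) once its pending update exceeds a fraction of one
    quantization step, avoiding per-layer scale recomputation."""
    proxy -= lr * grad                              # cheap accumulation step
    pending = np.abs(proxy).max(axis=1)             # per-row pending magnitude
    for i in np.where(pending > rel_threshold * scale[:, 0])[0]:
        row = w_q[i] * scale[i] + proxy[i]          # dequantize + fold row i
        s = max(np.abs(row).max() / FP8_MAX, 1e-12) # new scale for row i only
        w_q[i] = np.clip(np.round(row / s), -FP8_MAX, FP8_MAX)
        scale[i, 0] = s
        proxy[i] = 0.0                              # row's update is absorbed
    return w_q, scale, proxy

# toy backbone: every row is all-ones, stored as FP8-style (q, scale) pairs
w_q = np.full((4, 4), FP8_MAX)
scale = np.full((4, 1), 1.0 / FP8_MAX)
proxy = np.zeros((4, 4))
grad = np.zeros((4, 4)); grad[0] = -100.0           # large update hits row 0 only

w_q, scale, proxy = rowwise_proxy_update(w_q, scale, proxy, grad)
```

Under these assumptions only row 0 is requantized while the other rows, and their scales, are untouched, which is the property the summary attributes to the mechanism: substantial updates reach the quantized backbone without recomputing quantization parameters for the entire layer.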
Kanghyun Choi
Department of Electrical and Computer Engineering, Seoul National University
Hyeyoon Lee
Department of Electrical and Computer Engineering, Seoul National University
SunJong Park
Department of Electrical and Computer Engineering, Seoul National University
Dain Kwon
Department of Electrical and Computer Engineering, Seoul National University
Jinho Lee
Department of Electrical and Computer Engineering, Seoul National University
Computer architecture · Computer systems · Machine learning