🤖 AI Summary
LoRA fine-tuning sees limited acceleration under FP8 low-bit training because frequent quantization/dequantization overhead dominates its small matrix multiplications. To address this, the paper proposes FALQON, a tightly integrated framework that merges LoRA adapters into an FP8-quantized backbone and restructures the end-to-end forward-backward computation flow. It introduces a row-wise proxy gradient update mechanism that avoids per-layer recomputation of quantization parameters, substantially reducing quantization overhead, and a quantization-aware propagation scheme that enables fully FP8 training without mixed-precision fallbacks. Experiments show that FALQON matches the accuracy of existing quantized LoRA methods while training roughly 3× faster. Because the workflow is FP8 end to end, it also removes the need for post-training quantization, easing deployment into FP8 inference pipelines.
📝 Abstract
Low-bit floating-point (FP) formats, such as FP8, provide significant acceleration and memory savings in model training thanks to native hardware support on modern GPUs and NPUs. However, we analyze that FP8 quantization offers speedup primarily for large-dimensional matrix multiplications, while inherent quantization overheads diminish speedup when applied to low-rank adaptation (LoRA), which uses small-dimensional matrices for efficient fine-tuning of large language models (LLMs). To address this limitation, we propose FALQON, a novel framework that eliminates the quantization overhead from separate LoRA computational paths by directly merging LoRA adapters into an FP8-quantized backbone during fine-tuning. Furthermore, we reformulate the forward and backward computations for merged adapters to significantly reduce quantization overhead, and introduce a row-wise proxy update mechanism that efficiently integrates substantial updates into the quantized backbone. Experimental evaluations demonstrate that FALQON achieves approximately a 3$\times$ training speedup over existing quantized LoRA methods with a similar level of accuracy, providing a practical solution for efficient large-scale model fine-tuning. Moreover, FALQON's end-to-end FP8 workflow removes the need for post-training quantization, facilitating efficient deployment. Code is available at https://github.com/iamkanghyunchoi/falqon.
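The core idea the abstract describes, folding a low-rank LoRA update directly into a row-wise-quantized backbone rather than running a separate adapter path, can be sketched with a toy NumPy quantizer. This is a hedged illustration, not the paper's implementation: real FALQON uses hardware FP8 (E4M3) kernels, whereas here rounding to a per-row absmax scale merely stands in for FP8 quantization, and all variable names are illustrative.

```python
import numpy as np

FP8_E4M3_MAX = 448.0  # max magnitude representable in FP8 E4M3

def quantize_rowwise(w):
    """Per-row absmax scaling into the FP8 E4M3 range.
    Rounding to the scaled grid is a crude float stand-in for
    real FP8 casting; only the row-wise scale logic matters here."""
    scale = np.abs(w).max(axis=1, keepdims=True) / FP8_E4M3_MAX
    scale = np.where(scale == 0, 1.0, scale)  # guard all-zero rows
    q = np.clip(np.round(w / scale), -FP8_E4M3_MAX, FP8_E4M3_MAX)
    return q, scale

def dequantize(q, scale):
    return q * scale

rng = np.random.default_rng(0)
W = rng.standard_normal((8, 8))          # toy quantized backbone weight
A = 0.01 * rng.standard_normal((8, 2))   # toy LoRA factors, rank r = 2
B = 0.01 * rng.standard_normal((2, 8))

Wq, s = quantize_rowwise(W)

# Merged update: fold the low-rank delta B@A into the quantized backbone,
# re-deriving only row-wise scales instead of keeping a separate
# (and repeatedly re-quantized) LoRA computational path.
delta = A @ B
Wq_new, s_new = quantize_rowwise(dequantize(Wq, s) + delta)

# The merged quantized weight stays close to the full-precision target.
err = np.abs(dequantize(Wq_new, s_new) - (W + delta)).max()
```

Because only one quantized weight per layer participates in the forward and backward passes, the small-matrix multiplications of the adapter path, where the abstract notes FP8 quantization overhead outweighs its speedup, disappear from the training loop.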