Towards Efficient Pre-training: Exploring FP4 Precision in Large Language Models

📅 2025-02-17
📈 Citations: 0
Influential: 0
🤖 AI Summary
FP4 quantization in large language model (LLM) pretraining suffers from gradient instability and poor convergence due to limited representational capacity. Method: We propose the first end-to-end stable FP4 training paradigm, featuring a module- and phase-aware mixed-precision quantization strategy: multi-head attention and linear layers are assigned FP4, FP8, or BF16 precision based on numerical sensitivity; fine-grained gradient clipping and rescaling mitigate FP4 gradient distortion. Contribution/Results: Experiments show that, at equivalent model scale, our method achieves convergence accuracy comparable to BF16 and FP8 baselines while substantially reducing theoretical computational cost. This work establishes the first stable FP4 pretraining framework, enabling efficient LLM training for next-generation low-precision hardware.
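The summary's "fine-grained gradient clipping and rescaling" can be sketched as simulated per-block FP4 quantization. This is a hypothetical illustration, not the paper's implementation: the FP4 E2M1 value grid, the percentile-based clipping bound, and the block size are all assumptions.

```python
import numpy as np

# Representable magnitudes of FP4 E2M1 (a common 4-bit float layout).
# Assumption: the paper does not specify its exact FP4 encoding.
FP4_GRID = np.array([0.0, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 6.0])

def quantize_fp4_blockwise(x, block=16, clip_pct=99.9):
    """Simulated per-block FP4 quantization with outlier clipping.

    Hypothetical sketch: values are clipped to a high percentile before
    per-block scales are computed, so rare outliers do not inflate the
    quantization step; each block is then rescaled into the FP4 range.
    """
    # Clipping bound from the unpadded tensor.
    bound = np.percentile(np.abs(x.ravel()), clip_pct)

    flat = x.ravel().astype(np.float64)
    pad = (-len(flat)) % block
    flat = np.pad(flat, (0, pad))
    blocks = np.clip(flat.reshape(-1, block), -bound, bound)

    # Per-block rescaling into the FP4 dynamic range [-6, 6].
    scales = np.abs(blocks).max(axis=1, keepdims=True) / FP4_GRID[-1]
    scales[scales == 0] = 1.0
    scaled = blocks / scales

    # Round each magnitude to the nearest representable FP4 value.
    mag = np.abs(scaled)
    idx = np.abs(mag[..., None] - FP4_GRID).argmin(-1)
    q = np.sign(scaled) * FP4_GRID[idx]

    return (q * scales).ravel()[:x.size].reshape(x.shape).astype(x.dtype)
```

Inputs that already lie on the scaled FP4 grid pass through unchanged; everything else rounds to the nearest representable value in its block.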

📝 Abstract
The burgeoning computational demands for training large language models (LLMs) necessitate efficient methods, including quantized training, which leverages low-bit arithmetic operations to reduce costs. While FP8 precision has shown potential, leveraging FP4 remains challenging due to inherent quantization errors and limited representation capability. Based on the Transformer architecture, we present an FP4 training scheme for LLMs, overcoming these obstacles through mixed-precision quantization strategies tailored for different modules and training stages. This allows us to apply the precision level suitable for distinct components within the model, ensuring that multi-head attention and linear layers are handled appropriately. Our pretraining recipe ensures stability in backpropagation by incorporating fine-grained quantization methods with a target precision training schedule. Experimental results demonstrate that our FP4 training scheme achieves accuracy comparable to BF16 and FP8, with smaller theoretical computational cost. With the advent of next-generation hardware supporting FP4, our method sets the foundation for efficient ultra-low precision training.
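The module- and stage-aware precision assignment the abstract describes can be sketched as a simple policy function. This is a toy illustration under stated assumptions: the module-name keywords, phase names, and the specific FP4/FP8/BF16 rules are hypothetical, not the paper's actual recipe.

```python
from enum import Enum

class Precision(Enum):
    FP4 = "fp4"
    FP8 = "fp8"
    BF16 = "bf16"

def assign_precision(module_name: str, phase: str) -> Precision:
    """Toy phase- and module-aware precision policy (assumptions only).

    - Early training ("warmup") stays in BF16 for stability.
    - Numerically sensitive modules (norms, embeddings, output head,
      softmax) keep BF16 throughout.
    - Attention matmuls use FP8 (moderate sensitivity).
    - Bulk linear/FFN layers, which dominate FLOPs, drop to FP4.
    """
    if phase == "warmup":
        return Precision.BF16
    if any(k in module_name for k in ("norm", "embed", "lm_head", "softmax")):
        return Precision.BF16
    if "attn" in module_name:
        return Precision.FP8
    return Precision.FP4
```

The design point this mirrors is that ultra-low precision is applied only where the FLOP savings are largest and the numerical risk is smallest, while fragile components retain higher precision.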
Problem

Research questions and friction points this paper is trying to address.

Exploring FP4 precision in LLMs
Overcoming FP4 quantization challenges
Efficient ultra-low precision training
Innovation

Methods, ideas, or system contributions that make the work stand out.

FP4 precision training
mixed-precision quantization strategies
fine-grained quantization methods
Jiecheng Zhou
School of Information Science and Technology, University of Science and Technology of China
Ding Tang
Shanghai Artificial Intelligence Laboratory
Rong Fu
Shanghai Artificial Intelligence Laboratory
Boni Hu
Northwestern Polytechnical University
Haoran Xu
Shanghai Artificial Intelligence Laboratory
Yi Wang
Shanghai Artificial Intelligence Laboratory
Zhilin Pei
Shanghai Artificial Intelligence Laboratory
Zhongling Su
Shanghai Artificial Intelligence Laboratory
Liang Liu
Shanghai Artificial Intelligence Laboratory
Xingcheng Zhang
Shanghai Artificial Intelligence Laboratory
Weiming Zhang
School of Information Science and Technology, University of Science and Technology of China