🤖 AI Summary
FP4 quantization in large language model (LLM) pretraining suffers from gradient instability and poor convergence due to FP4's limited representational capacity. Method: We propose the first end-to-end stable FP4 training paradigm, built on a module- and phase-aware mixed-precision quantization strategy: multi-head attention and linear layers are assigned FP4, FP8, or BF16 precision according to their numerical sensitivity, while fine-grained gradient clipping and rescaling mitigate FP4 gradient distortion. Contribution/Results: Experiments show that, at equivalent model scale, our method matches the convergence accuracy of BF16 and FP8 baselines while substantially reducing theoretical computational cost. This work establishes the first stable FP4 pretraining framework, enabling efficient LLM training on next-generation low-precision hardware.
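The summary does not spell out the precision assignment or the clipping rule, but the general shape of the technique can be sketched. Below is a minimal PyTorch illustration of a module-to-precision map plus fine-grained (block-wise) FP4 quantization with percentile-based gradient clipping and rescaling; all module names, thresholds, block sizes, and the helper functions are hypothetical assumptions, not the paper's actual implementation.

```python
import torch

# Hypothetical module-to-precision map: numerically sensitive modules stay
# at higher precision, large linear layers drop to FP4. The paper's actual
# sensitivity-based assignment rule is not given in this summary.
PRECISION_MAP = {
    "attn.qkv_proj": "fp8",   # sensitive attention projection: kept at FP8
    "attn.out_proj": "fp4",
    "mlp.up_proj":   "fp4",
    "mlp.down_proj": "fp4",
    "lm_head":       "bf16",  # most sensitive: kept at BF16
}

# Representable |E2M1| FP4 magnitudes: {0, 0.5, 1, 1.5, 2, 3, 4, 6}
FP4_LEVELS = torch.tensor([0.0, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 6.0])

def quantize_fp4_blockwise(x: torch.Tensor, block: int = 64) -> torch.Tensor:
    """Simulated fine-grained FP4 quantization: scale each block into the
    E2M1 dynamic range, snap each entry to the nearest representable
    magnitude, then rescale. Assumes x.numel() is divisible by `block`."""
    flat = x.reshape(-1, block)
    # Per-block scale so the largest magnitude maps to 6.0 (the FP4 max).
    scale = flat.abs().amax(dim=1, keepdim=True).clamp_min(1e-12) / 6.0
    scaled = flat / scale
    levels = FP4_LEVELS.to(x.device)
    # Nearest-level rounding, sign restored afterwards.
    idx = (scaled.abs().unsqueeze(-1) - levels).abs().argmin(dim=-1)
    q = levels[idx] * scaled.sign()
    return (q * scale).reshape_as(x)

def clip_and_rescale_grad(grad: torch.Tensor, clip_pct: float = 0.999) -> torch.Tensor:
    """Fine-grained gradient clipping and rescaling (threshold hypothetical):
    clip outliers at a high percentile so FP4's narrow range is not wasted
    on a few extreme values, then quantize block-wise."""
    threshold = torch.quantile(grad.abs().float().flatten(), clip_pct).to(grad.dtype)
    clipped = grad.clamp(-threshold, threshold)
    return quantize_fp4_blockwise(clipped)
```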
📝 Abstract
The burgeoning computational demands of training large language models (LLMs) necessitate efficient methods, including quantized training, which leverages low-bit arithmetic to reduce cost. While FP8 precision has shown promise, leveraging FP4 remains challenging due to inherent quantization errors and its limited representational capacity. Building on the Transformer architecture, we present an FP4 training scheme for LLMs that overcomes these obstacles through mixed-precision quantization strategies tailored to different modules and training stages. This allows us to assign each component of the model the precision it can tolerate, ensuring that multi-head attention and linear layers are handled appropriately. Our pretraining recipe ensures stability in backpropagation by combining fine-grained quantization methods with a target-precision training schedule. Experimental results demonstrate that our FP4 training scheme achieves accuracy comparable to BF16 and FP8, at a lower theoretical computational cost. With the advent of next-generation hardware supporting FP4, our method lays the foundation for efficient ultra-low-precision training.
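The abstract mentions a target-precision training schedule without spelling it out. One minimal sketch of what a phase-aware schedule could look like, assuming (purely as an illustration) a BF16 warmup phase and an FP8 cool-down phase; the phase fractions and precision choices here are hypothetical, not the paper's actual schedule:

```python
# Hypothetical phase-aware precision schedule: a short higher-precision
# warmup before FP4 takes over, easing back to FP8 near the end of
# training. All fractions are illustrative assumptions.
def precision_for_step(step: int, total_steps: int,
                       warmup_frac: float = 0.05,
                       cooldown_frac: float = 0.05) -> str:
    """Return the target precision for the current training step."""
    if step < warmup_frac * total_steps:
        return "bf16"  # stabilize early optimization at full precision
    if step >= (1.0 - cooldown_frac) * total_steps:
        return "fp8"   # reduce late-training quantization error
    return "fp4"       # bulk of pretraining runs in FP4
```

Under these assumed fractions, a 100,000-step run would spend its first 5,000 steps in BF16, the middle 90,000 in FP4, and the final 5,000 in FP8.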