On Stochastic Rounding with Few Random Bits

📅 2025-04-29
🤖 AI Summary
This work addresses the reliance of stochastic rounding (SR) on high-entropy random bits in low-precision (e.g., FP16) and mixed-precision computing, revealing a previously overlooked systematic bias introduced by few-bit stochastic rounding (FBSR) implementations: a bias invisible in infinite-precision analyses yet detrimental to numerical reliability. Through error modeling, floating-point rounding analysis, and empirical training runs (e.g., ResNet-18), the authors quantify the bias across several FBSR schemes, demonstrating up to a 1.2% degradation in training accuracy. The study extends the reliability assessment of low-precision computation by treating FBSR-induced bias as an explicit dimension, proposes a low-bit SR implementation approach that controls bias while preserving efficiency, and releases open-source, reproducible code. The work bridges theoretical SR analysis and practical low-precision system design, enabling more robust and predictable stochastic quantization in deep learning accelerators.

📝 Abstract
Large-scale numerical computations make increasing use of low-precision (LP) floating point formats and mixed precision arithmetic, which can be enhanced by the technique of stochastic rounding (SR), that is, rounding an intermediate high-precision value up or down randomly as a function of the value's distance to the two rounding candidates. Stochastic rounding requires, in addition to the high-precision input value, a source of random bits. As the provision of high-quality random bits is an additional computational cost, it is of interest to require as few bits as possible while maintaining the desirable properties of SR in a given computation, or computational domain. This paper examines a number of possible implementations of few-bit stochastic rounding (FBSR), and shows how several natural implementations can introduce sometimes significant biases into the rounding process that are not present in infinite-bit, infinite-precision analyses of these implementations. The paper explores the impact of these biases in machine learning examples, and hence opens another class of configuration parameters of which practitioners should be aware when developing or adopting low-precision floating point. Code is available at http://github.com/graphcore-research/arith25-stochastic-rounding.
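The comparison-based rounding the abstract describes can be sketched in a few lines. With only k random bits, the achievable round-up probabilities are multiples of 2^-k, which is where the bias comes from. This is a minimal sketch on an integer grid, not the paper's actual implementation; the function name is illustrative.

```python
def round_up_prob(frac, k):
    """Exact probability that a k-bit comparison `r < frac * 2**k`
    (r drawn uniformly from {0, ..., 2**k - 1}) rounds up.
    Ideal SR would round up with probability exactly `frac`."""
    n = 1 << k
    return sum(r < frac * n for r in range(n)) / n

# With frac = 0.3 the ideal round-up probability is 0.3, but:
#   k = 2 bits -> 0.5      (bias +0.2)
#   k = 8 bits -> ~0.3008  (bias shrinks as k grows)
```

Because the comparison effectively quantizes the probability upward to the next multiple of 2^-k, this particular scheme is biased toward rounding up; other natural schemes (e.g., truncating the fraction before comparison) bias in the opposite direction.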
Problem

Research questions and friction points this paper is trying to address.

Bias introduced by few-bit stochastic rounding (FBSR) implementations
Reducing the number of random bits while preserving the benefits of SR
Impact of rounding bias on low-precision machine learning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Few-bit stochastic rounding (FBSR) reduces the cost of generating random bits
Characterizes the bias of natural FBSR implementations
Evaluates the effect of FBSR bias on low-precision machine learning
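To see why such a bias matters in practice, a small accumulation experiment helps: summing many small increments on a coarse grid, ideal SR tracks the true sum in expectation, while a 2-bit comparison-based FBSR drifts. This is a toy sketch under assumed grid spacing 1.0, not the paper's experimental setup.

```python
import random

def sr_add(acc, x, rand):
    """Add x to acc, then stochastically round to an integer grid:
    round up with probability rand() < (distance past the lower point)."""
    s = acc + x
    lo = float(int(s // 1))  # lower grid point (grid spacing 1.0)
    return lo + (1.0 if rand() < s - lo else 0.0)

random.seed(0)
k = 2
ideal_rand = random.random                                # effectively infinite bits
fewbit_rand = lambda: random.getrandbits(k) / (1 << k)    # only k random bits

ideal = fewbit = 0.0
for _ in range(10_000):
    ideal = sr_add(ideal, 0.3, ideal_rand)
    fewbit = sr_add(fewbit, 0.3, fewbit_rand)

# True sum is 3000. Ideal SR stays near it; the 2-bit scheme rounds
# up with probability 0.5 instead of 0.3 and drifts toward 5000.
```

The drift grows linearly with the number of accumulations, which is one mechanism by which FBSR bias can degrade long training runs.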