Generalized Orders of Magnitude for Scalable, Parallel, High-Dynamic-Range Computation

📅 2025-10-03
📈 Citations: 0
Influential: 0
🤖 AI Summary
Long sequences of real-valued accumulations, such as matrix chain products in deep learning and financial modeling, are prone to floating-point overflow and underflow, which severely constrains dynamic range and numerical stability. To address this, the paper proposes generalized orders of magnitude (GOOMs), which extend the classical order-of-magnitude concept so that floating-point numbers arise as a special case. GOOMs are paired with a selective-resetting mechanism and a custom parallel prefix-scan algorithm, enabling stable, low-overhead parallel numerical computation on GPUs. The approach is the first to support efficient, numerically stable parallel estimation of Lyapunov spectra and long-range dependency modeling in non-diagonal recurrent neural networks. Experiments show that GOOMs surpass IEEE 754 floating-point limits in matrix chain multiplication and spectral estimation, expanding dynamic range by a factor of more than 10³⁰⁰ while preserving precision and making previously infeasible large-scale computations practical.

📝 Abstract
Many domains, from deep learning to finance, require compounding real numbers over long sequences, often leading to catastrophic numerical underflow or overflow. We introduce generalized orders of magnitude (GOOMs), a principled extension of traditional orders of magnitude that incorporates floating-point numbers as a special case, and which in practice enables stable computation over significantly larger dynamic ranges of real numbers than previously possible. We implement GOOMs, along with an efficient custom parallel prefix scan, to support native execution on parallel hardware such as GPUs. We demonstrate that our implementation of GOOMs outperforms traditional approaches with three representative experiments, all of which were previously considered impractical or impossible, and now become possible and practical: (1) compounding real matrix products far beyond standard floating-point limits; (2) estimating spectra of Lyapunov exponents in parallel, orders of magnitude faster than with previous methods, applying a novel selective-resetting method to prevent state colinearity; and (3) capturing long-range dependencies in deep recurrent neural networks with non-diagonal recurrent states, computed in parallel via a prefix scan, without requiring any form of stabilization. Our results show that our implementation of GOOMs, combined with efficient parallel scanning, offers a scalable and numerically robust alternative to conventional floating-point numbers for high-dynamic-range applications.
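The core idea described in the abstract, compounding real numbers in a log-domain representation so that multiplication never leaves the representable range, can be illustrated with a minimal scalar sketch. This is a simplified analogue, not the paper's implementation: the actual GOOM representation is richer (the paper works with matrices and a custom GPU prefix scan), and the names `to_goomlike` and `mul` are hypothetical.

```python
import math

def to_goomlike(x):
    # Hypothetical scalar analogue of a GOOM: store (sign, log-magnitude).
    # Zero maps to log-magnitude -inf, mirroring how floats handle it.
    if x == 0.0:
        return (0.0, float("-inf"))
    return (math.copysign(1.0, x), math.log(abs(x)))

def mul(a, b):
    # Multiplication becomes addition of log-magnitudes: no overflow.
    return (a[0] * b[0], a[1] + b[1])

# Compound 500 factors of 1e10: the true product is 1e5000,
# far beyond float64's ~1.8e308 limit.
factors = [1e10] * 500

naive = 1.0
for f in factors:
    naive *= f          # overflows to inf partway through

acc = (1.0, 0.0)        # log-domain identity: sign=+1, log-magnitude=0
for f in factors:
    acc = mul(acc, to_goomlike(f))

print(naive)                  # inf
print(acc[1] / math.log(10))  # decimal order of magnitude, ≈ 5000
```

The log-domain accumulator recovers the correct order of magnitude (10⁵⁰⁰⁰) where the direct float64 product saturates at infinity, which is the dynamic-range expansion the abstract refers to.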
Problem

Research questions and friction points this paper is trying to address.

Addressing numerical underflow and overflow in long sequences
Enabling stable computation over large dynamic ranges
Supporting parallel execution for high-dynamic-range applications
Innovation

Methods, ideas, or system contributions that make the work stand out.

Generalized orders of magnitude subsume floating-point numbers as a special case
Custom parallel prefix scan enables native GPU execution
Selective resetting method prevents state colinearity in computation
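Because log-domain multiplication is associative (signs multiply, log-magnitudes add), a chain of products can be evaluated with a parallel prefix scan, the pattern the paper implements natively on GPUs. The sketch below simulates a Hillis-Steele inclusive scan sequentially on scalar (sign, log-magnitude) pairs; the function names and the scalar setting are illustrative assumptions, not the paper's API.

```python
import math

def combine(a, b):
    # Associative product in log space: signs multiply, log-magnitudes add.
    return (a[0] * b[0], a[1] + b[1])

def inclusive_scan(xs, op):
    # Hillis-Steele scan: O(log n) parallel steps, simulated sequentially here.
    out = list(xs)
    n, step = len(out), 1
    while step < n:
        nxt = list(out)
        for i in range(step, n):
            nxt[i] = op(out[i - step], out[i])
        out = nxt
        step *= 2
    return out

vals = [-2.0, 0.5, -4.0, 10.0]
logs = [(math.copysign(1.0, v), math.log(abs(v))) for v in vals]
prefix = inclusive_scan(logs, combine)

# Recover ordinary prefix products for checking (safe at this small scale).
recovered = [s * math.exp(m) for s, m in prefix]
print(recovered)   # close to [-2.0, -1.0, 4.0, 40.0]
```

Each position ends up holding the product of all earlier values, so long product chains parallelize across the sequence dimension instead of running serially, which is what makes the paper's parallel Lyapunov-spectrum estimation and non-diagonal recurrent scans feasible.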