Mixed-Precision Conjugate Gradient Solvers with RL-Driven Precision Tuning

📅 2025-04-19
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the accuracy-efficiency trade-off in preconditioned conjugate gradient (PCG) methods for solving large-scale sparse linear systems. We propose the first reinforcement learning–based mixed-precision dynamic scheduling framework, modeling precision selection per iteration as a Markov decision process and employing Q-learning to enable zero-shot, cross-problem adaptive scheduling without retraining. Critical scalar operations and residual verification are rigorously maintained in double precision to ensure numerical stability. Evaluated on diverse real-world sparse matrices, our method achieves an average 1.8× speedup with accuracy loss below 1e−12, demonstrating strong generalization—no retraining or manual analysis is required for new problems. The core contribution lies in pioneering the integration of reinforcement learning into mixed-precision numerical algorithm design, thereby unifying numerical stability, computational efficiency, and problem-agnostic adaptability.
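The iteration-level scheme described above can be illustrated with a minimal sketch: a CG loop (preconditioning omitted for brevity) in which a per-iteration policy picks the working precision of the matrix-vector product, while scalar reductions and the residual check stay in double precision. This is not the authors' implementation; `choose_precision` is a hypothetical policy callable standing in for the learned scheduler.

```python
import numpy as np

def mixed_precision_cg(A, b, choose_precision, tol=1e-12, max_iter=1000):
    """CG sketch: each iteration's matrix-vector product runs in a precision
    chosen by a policy; scalars and the residual check stay in float64,
    mirroring the stability safeguard the paper describes."""
    x = np.zeros_like(b, dtype=np.float64)
    r = b.astype(np.float64) - A @ x
    p = r.copy()
    rs_old = float(r @ r)                       # scalar kept in double precision
    k = 0
    for k in range(max_iter):
        dtype = choose_precision(k, rs_old)     # e.g. np.float32 or np.float64
        # Low-precision SpMV (re-casting per step is wasteful; sketch only).
        Ap = (A.astype(dtype) @ p.astype(dtype)).astype(np.float64)
        alpha = rs_old / float(p @ Ap)          # scalars in float64
        x += alpha * p
        r -= alpha * Ap
        rs_new = float(r @ r)                   # residual verified in float64
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs_old) * p
        rs_old = rs_new
    return x, k + 1

# Toy SPD system; a trivial policy that always returns float64.
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x, iters = mixed_precision_cg(A, b, lambda k, rs: np.float64)
```

Swapping the constant policy for one that returns `np.float32` on well-behaved iterations is where the reported speedup would come from, at the cost the scheduler must learn to manage.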

📝 Abstract
This paper presents a novel reinforcement learning (RL) framework for dynamically optimizing numerical precision in the preconditioned conjugate gradient (CG) method. By modeling precision selection as a Markov Decision Process (MDP), we employ Q-learning to adaptively assign precision levels to key operations, striking a balance between computational efficiency and numerical accuracy while ensuring stability through double-precision scalar and residual computations. In practice, the algorithm is trained on one set of problems and then performs precision-selection inference on out-of-sample problems, without re-analysis or retraining for new datasets. This enables the method to adapt seamlessly to new problem instances without the computational overhead of recalibration. Our results demonstrate the effectiveness of RL in enhancing the solver's performance, marking the first application of RL to mixed-precision numerical methods. The findings highlight the approach's practical advantages, robustness, and scalability, providing valuable insights into its integration with iterative solvers and paving the way for AI-driven advances in scientific computing.
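The MDP framing in the abstract can be sketched with tabular Q-learning. The state encoding, action set, and hyperparameters below are illustrative assumptions, not the paper's specification: states discretize the per-iteration residual-reduction ratio, and actions are candidate precision levels.

```python
import numpy as np

# Hypothetical MDP encoding (assumed for illustration).
ACTIONS = ["fp16", "fp32", "fp64"]   # action = working precision
N_STATES = 8                         # coarse buckets of convergence behavior

def discretize_state(res_ratio):
    """Map the residual ratio ||r_k|| / ||r_{k-1}|| to a discrete state."""
    return int(np.clip(-np.log10(max(res_ratio, 1e-16)), 0, N_STATES - 1))

def q_update(Q, s, a, reward, s_next, alpha=0.1, gamma=0.9):
    """Standard tabular Q-learning update."""
    Q[s, a] += alpha * (reward + gamma * Q[s_next].max() - Q[s, a])

def select_action(Q, s, rng, eps=0.1):
    """Epsilon-greedy selection over precision levels."""
    if rng.random() < eps:
        return int(rng.integers(len(ACTIONS)))
    return int(np.argmax(Q[s]))

Q = np.zeros((N_STATES, len(ACTIONS)))
```

In a full system the reward would trade off iteration cost against residual progress; once trained, the greedy policy (`eps=0`) can be applied to unseen systems, which is the zero-shot inference mode the abstract describes.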
Problem

Research questions and friction points this paper is trying to address.

Optimizing numerical precision in CG method using RL
Balancing computational efficiency and numerical accuracy adaptively
Enabling seamless adaptation to new problem instances
Innovation

Methods, ideas, or system contributions that make the work stand out.

RL-driven dynamic precision tuning for CG
Q-learning optimizes precision and efficiency
Double-precision ensures stability in computations
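The stability bullet above is easy to demonstrate: accumulating a long dot product in single precision drifts, while a double-precision accumulation of the same data serves as the trusted reference. This is a generic illustration of why the scheme keeps scalar reductions in double precision, not the paper's code.

```python
import numpy as np

rng = np.random.default_rng(42)
v = rng.standard_normal(1_000_000).astype(np.float32)

# Same vector, two accumulation precisions for the dot product v·v.
dot_f32 = float(np.dot(v, v))                                  # float32 path
dot_f64 = float(np.dot(v.astype(np.float64), v.astype(np.float64)))

# Relative drift of the single-precision accumulation.
rel_err = abs(dot_f32 - dot_f64) / dot_f64
```

The drift is tiny per element but grows with vector length, which is why CG's scalar reductions (`alpha`, `beta`, residual norms) are the wrong place to economize on precision.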