Precision Autotuning for Linear Solvers via Reinforcement Learning

📅 2026-01-02
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the trade-off between accuracy and efficiency in solving linear systems by formulating adaptive precision tuning as a contextual bandit problem for the first time. Leveraging reinforcement learning, the method dynamically selects the numerical precision at each iteration step: a discretized state space encodes system characteristics such as the condition number and matrix norm, and a Q-table maps these states to precision configurations. An ε-greedy strategy optimizes a multi-objective reward balancing computational cost and solution accuracy. Experimental results demonstrate that the approach achieves solution accuracy comparable to double precision while significantly reducing computational overhead, and it exhibits strong generalization across diverse, previously unseen datasets, thereby advancing the development of mixed-precision numerical methods.
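The bandit loop described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the bin counts, action set (`fp16`/`fp32`/`fp64`), and class/method names are all assumptions, and the reward signal would in practice combine accuracy and cost terms as the paper describes.

```python
import numpy as np

# Hypothetical sketch of the contextual-bandit setup: a state is a
# (condition-number bin, matrix-norm bin) pair; an action is a precision
# configuration. Incremental action-value (Q-table) estimation, epsilon-greedy.

PRECISIONS = ["fp16", "fp32", "fp64"]  # assumed action set


class PrecisionBandit:
    def __init__(self, n_cond_bins=4, n_norm_bins=4, epsilon=0.1, seed=0):
        self.epsilon = epsilon
        self.rng = np.random.default_rng(seed)
        self.n_norm_bins = n_norm_bins
        # Q-table: one action-value row per discretized state
        self.q = np.zeros((n_cond_bins * n_norm_bins, len(PRECISIONS)))
        self.counts = np.zeros_like(self.q)

    def state(self, cond_bin, norm_bin):
        # flatten the two feature bins into a single state index
        return cond_bin * self.n_norm_bins + norm_bin

    def select(self, s):
        if self.rng.random() < self.epsilon:          # explore
            return int(self.rng.integers(len(PRECISIONS)))
        return int(np.argmax(self.q[s]))              # exploit

    def update(self, s, a, reward):
        # incremental sample-average update of the action value
        self.counts[s, a] += 1
        self.q[s, a] += (reward - self.q[s, a]) / self.counts[s, a]
```

With `epsilon=0` the agent is purely greedy, which is convenient for deterministic testing; during training a small positive ε keeps exploring alternative precision choices.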

📝 Abstract
We propose a reinforcement learning (RL) framework for adaptive precision tuning of linear solvers that can be extended to general algorithms. The framework is formulated as a contextual bandit problem and solved using incremental action-value estimation with a discretized state space to select optimal precision configurations for computational steps, balancing precision and computational efficiency. To verify its effectiveness, we apply the framework to iterative refinement for solving linear systems $Ax = b$. In this application, our approach dynamically chooses precisions based on features computed from the system. In detail, a Q-table maps discretized features (e.g., approximate condition number and matrix norm) to actions (chosen precision configurations for specific steps), optimized via an epsilon-greedy strategy to maximize a multi-objective reward balancing accuracy and computational cost. Empirical results demonstrate effective precision selection, reducing computational cost while maintaining accuracy comparable to double-precision baselines. The framework generalizes to diverse out-of-sample data and offers insight into applying RL-based precision selection to other numerical algorithms, advancing mixed-precision numerical methods in scientific computing. To the best of our knowledge, this is the first work on precision autotuning with RL that is verified on unseen datasets.
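For context, the application target named in the abstract, iterative refinement for $Ax = b$, can be sketched as below. This is the classic mixed-precision scheme with a fixed working precision, not the paper's method: the paper's RL agent would instead choose the precision per step from system features, and the function name and parameters here are illustrative assumptions.

```python
import numpy as np

# Illustrative mixed-precision iterative refinement for A x = b.
# The factorization/inner solve runs in a low working precision, while the
# residual is computed and accumulated in float64. Here the working precision
# is fixed; in the paper it is selected adaptively by the RL agent.

def iterative_refinement(A, b, work_dtype=np.float32, max_iter=20, tol=1e-10):
    A64 = np.asarray(A, dtype=np.float64)
    b64 = np.asarray(b, dtype=np.float64)
    A_low = A64.astype(work_dtype)
    # initial solve in low precision, promoted to double for accumulation
    x = np.linalg.solve(A_low, b64.astype(work_dtype)).astype(np.float64)
    for _ in range(max_iter):
        r = b64 - A64 @ x                  # residual in double precision
        if np.linalg.norm(r) <= tol * np.linalg.norm(b64):
            break
        # correction step solved in low precision
        d = np.linalg.solve(A_low, r.astype(work_dtype))
        x += d.astype(np.float64)
    return x
```

For a well-conditioned system, a few refinement steps recover near-double-precision accuracy even though each inner solve runs in float32, which is exactly the cost/accuracy trade-off the RL tuner exploits.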
Problem

Research questions and friction points this paper is trying to address.

precision autotuning
linear solvers
reinforcement learning
mixed-precision computing
computational efficiency
Innovation

Methods, ideas, or system contributions that make the work stand out.

reinforcement learning
precision autotuning
linear solvers
mixed-precision computing
contextual bandit