New Insights and Algorithms for Optimal Diagonal Preconditioning

📅 2025-09-27
🤖 AI Summary
This work addresses the optimal design of diagonal preconditioners to simultaneously minimize both the classical worst-case condition number κ and the average-case ω-condition number, thereby accelerating convergence in preconditioned conjugate gradient (PCG) methods and optimization algorithms. We propose an affine-transformation-based pseudoconvex reformulation that converts the original nonconvex problem into an efficiently solvable form with only n-dimensional variables. Crucially, we establish for the first time that ω-optimal preconditioners inherently yield significantly reduced κ values, further enhancing PCG convergence. Instead of computationally expensive semidefinite programming (SDP), we employ a subgradient method for scalable and efficient optimization. Experiments demonstrate that our approach outperforms existing SDP-based methods in both scalability and computational efficiency, while delivering superior and more robust convergence acceleration across diverse problem instances.
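To make the setup concrete, the sketch below minimizes $κ(DAD)$ over diagonal $D = \mathrm{diag}(d)$ with a plain projected subgradient loop. This is only an illustration of the problem the paper solves: it differentiates $κ$ directly through the extreme eigenpairs rather than using the paper's affine pseudoconvex reformulation, and the function names (`kappa_subgradient_step`, `optimize_diag_precond`), step-size schedule, and Jacobi warm start are all my own choices, not the authors'.

```python
import numpy as np

def kappa_subgradient_step(A, d):
    """Value and a subgradient of kappa(D A D), D = diag(d).

    Uses the extreme eigenpairs of M = D A D (valid direction when
    both eigenvalues are simple): for an eigenpair (lam, v) of M,
    d(lam)/d(d_i) = 2 * v_i * (A @ (d * v))_i.
    """
    M = (d[:, None] * A) * d[None, :]          # D A D without forming D
    w, V = np.linalg.eigh(M)
    lmin, lmax = w[0], w[-1]
    vmin, vmax = V[:, 0], V[:, -1]
    g_max = 2.0 * vmax * (A @ (d * vmax))
    g_min = 2.0 * vmin * (A @ (d * vmin))
    # quotient rule for kappa = lmax / lmin
    g = (lmin * g_max - lmax * g_min) / lmin**2
    return lmax / lmin, g

def optimize_diag_precond(A, iters=500, step=1e-2):
    """Best-iterate subgradient descent on kappa(diag(d) A diag(d))."""
    d = 1.0 / np.sqrt(np.diag(A))              # Jacobi-style warm start
    best_d, best_k = d.copy(), np.inf
    for t in range(iters):
        k, g = kappa_subgradient_step(A, d)
        if k < best_k:
            best_k, best_d = k, d.copy()
        # normalized step with a diminishing 1/sqrt(t+1) schedule
        d = d - step / np.sqrt(t + 1.0) * g / (np.linalg.norm(g) + 1e-12)
        d = np.maximum(d, 1e-8)                # keep D positive definite
    return best_d, best_k
```

On a badly scaled symmetric positive definite matrix, the returned condition number should be far below `np.linalg.cond(A)`, which is the qualitative behavior the summary claims for the paper's (much more sophisticated) method.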

📝 Abstract
Preconditioning (scaling) is essential in many areas of mathematics, and in particular in optimization. In this work, we study the problem of finding an optimal diagonal preconditioner. We focus on minimizing two different notions of condition number: the classical, worst-case type $κ$-condition number, and the more averaging-motivated $ω$-condition number. We provide affine-based pseudoconvex reformulations of both optimization problems. The advantage of our formulations is that the gradient of the objective is inexpensive to compute and the optimization variable is just an $n \times 1$ vector. We also provide elegant characterizations of the optimality conditions of both problems. We develop a competitive subgradient method, with convergence guarantees, for $κ$-optimal diagonal preconditioning that scales much better and is more efficient than existing SDP-based approaches. We also show that the preconditioners found by our subgradient method lead to better PCG performance for solving linear systems than other approaches. Finally, we show the interesting phenomenon that applying the $ω$-optimal preconditioner to the exactly $κ$-optimally diagonally preconditioned matrix $A$ gives consistent, significantly improved convergence results for PCG methods.
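The PCG performance claims in the abstract can be seen with a standard textbook preconditioned conjugate gradient loop. Below is a minimal sketch: the diagonal (Jacobi) preconditioner used in the usage test is only a stand-in for the $κ$- and $ω$-optimal preconditioners the paper actually computes, and the function name `pcg` and its signature are assumptions of this sketch, not the authors' code.

```python
import numpy as np

def pcg(A, b, M_inv_diag=None, tol=1e-8, maxiter=1000):
    """Conjugate gradient with an optional diagonal preconditioner.

    M_inv_diag holds the diagonal of M^{-1}; None means no
    preconditioning. Returns (solution, iterations used).
    """
    n = len(b)
    x = np.zeros(n)
    r = b - A @ x                      # true residual
    z = r if M_inv_diag is None else M_inv_diag * r
    p = z.copy()
    rz = r @ z
    for k in range(1, maxiter + 1):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) <= tol * np.linalg.norm(b):
            return x, k
        z = r if M_inv_diag is None else M_inv_diag * r
        rz_new = r @ z
        p = z + (rz_new / rz) * p      # Fletcher-Reeves-style update
        rz = rz_new
    return x, maxiter
```

On an ill-scaled SPD system, even this simple Jacobi scaling cuts the iteration count sharply versus unpreconditioned CG; the paper's point is that optimally chosen diagonals do better still.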
Problem

Research questions and friction points this paper is trying to address.

Optimizing diagonal preconditioners to minimize condition numbers
Developing efficient subgradient methods for preconditioning optimization
Improving PCG convergence with novel preconditioning algorithms
Innovation

Methods, ideas, or system contributions that make the work stand out.

Affine pseudoconvex reformulations for condition number minimization
Competitive subgradient method for efficient diagonal preconditioning
Novel ω-optimal preconditioner application enhancing PCG convergence
Saeed Ghadimi
Department of Management Science and Engineering, University of Waterloo, ON, Canada
Woosuk L. Jung
Department of Combinatorics and Optimization, University of Waterloo, ON, Canada
Arnesh Sujanani
Department of Combinatorics and Optimization, University of Waterloo, ON, Canada
David Torregrosa-Belén
Department of Mathematics, University of Alicante, Alicante, Spain
Henry Wolkowicz
Department of Combinatorics and Optimization, University of Waterloo
Optimization · Mathematical Programming · Numerical Linear Algebra · Cone Optimization · Matrix Completions