🤖 AI Summary
To address the lack of a convenient, GPU-accelerated workflow for large-scale semidefinite programming (SDP), this work designs and implements a native Julia interface to the HALLaR and cuHALLaR low-rank SDP solvers, the latter GPU-accelerated. The interface uniformly supports both the standard SDPA format and a new hybrid sparse low-rank (HSLR) data format, offering a concise API for loading custom problem data, configuring solver parameters, and executing experiments end to end. Crucially, by passing HSLR structure directly to the solvers, it improves scalability in both memory footprint and computational efficiency. Built-in examples, including the SDP relaxations of matrix completion and maximum stable set, demonstrate robustness and high performance on large-scale instances, and experimental results confirm numerical stability and speedups over CPU-based alternatives. This work provides an efficient, user-friendly, ecosystem-native Julia entry point for SDP research and practical applications.
📝 Abstract
We present a Julia-based interface to the precompiled HALLaR and cuHALLaR binaries for solving large-scale semidefinite programs (SDPs). Both solvers are fast and numerically stable, and accept problem data either in the standard SDPA format or in a new enhanced format that exploits Hybrid Sparse Low-Rank (HSLR) structure. The interface allows users to load custom data files, configure solver options, and execute experiments directly from Julia. A collection of example problems is provided, including the SDP relaxations of the Matrix Completion and Maximum Stable Set problems.
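To make the included examples concrete, the Maximum Stable Set relaxation mentioned above is the standard Lovász theta SDP. For a graph $G = (V, E)$ with $n = |V|$ and $e$ the all-ones vector, it can be written as:

```math
\theta(G) \;=\; \max_{X \in \mathbb{S}^n} \; \langle e e^{\top}, X \rangle
\quad \text{s.t.} \quad X_{ij} = 0 \;\; \forall \, \{i,j\} \in E, \qquad \operatorname{tr}(X) = 1, \qquad X \succeq 0.
```

The objective matrix $ee^{\top}$ is rank one while each constraint matrix is extremely sparse, which is precisely the kind of hybrid sparse plus low-rank data that the HSLR format is designed to represent compactly.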