🤖 AI Summary
This work addresses the problem of approximating a given matrix by a multilevel low rank (MLR) matrix in the Frobenius norm, tackling three core challenges: choosing the hierarchical partition of rows and columns, allocating ranks to the blocks at each level under a total storage budget, and fitting the block factors themselves. The authors propose the first end-to-end joint optimization framework for MLR matrices, unifying partition design, rank allocation, and factor learning in a single model; the approach combines a hierarchical block-diagonal parameterization, alternating optimization of the factors, and a constrained rank allocation procedure. The resulting approximation retains the key efficiency of low rank matrices: for a fixed total rank, both storage and matrix-vector multiplication cost grow linearly in the matrix dimension. Empirical evaluation on multiple benchmark matrices shows the method reduces approximation error by 35% on average compared to single-level low-rank baselines at the same storage, significantly improving both accuracy and storage efficiency. The implementation is publicly available.
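To make the structure concrete, here is a minimal NumPy sketch (not the accompanying package's API) of a two-level MLR matrix stored as a sum of block-diagonal levels, each block in factored form `U @ V.T`. The layout, block coordinates, and function names are illustrative assumptions; it shows why a matvec costs only O((rows + cols) · rank) per block.

```python
import numpy as np

# Hypothetical two-level MLR matrix A = B1 + B2: level 1 is one low-rank
# block covering the whole matrix; level 2 refines it into two diagonal
# blocks. Each block is ((row0, row1), (col0, col1), U, V) with block = U V^T.
rng = np.random.default_rng(0)
n = 8
levels = [
    [((0, n), (0, n), rng.standard_normal((n, 2)), rng.standard_normal((n, 2)))],
    [((0, 4), (0, 4), rng.standard_normal((4, 1)), rng.standard_normal((4, 1))),
     ((4, 8), (4, 8), rng.standard_normal((4, 1)), rng.standard_normal((4, 1)))],
]

def mlr_matvec(levels, x):
    """Multiply an MLR matrix by x, level by level, block by block."""
    y = np.zeros(len(x))
    for blocks in levels:
        for (r0, r1), (c0, c1), U, V in blocks:
            # each block contributes U (V^T x_block): O((rows + cols) * rank)
            y[r0:r1] += U @ (V.T @ x[c0:c1])
    return y

def mlr_dense(levels, n):
    """Assemble the dense matrix the factored levels represent."""
    A = np.zeros((n, n))
    for blocks in levels:
        for (r0, r1), (c0, c1), U, V in blocks:
            A[r0:r1, c0:c1] += U @ V.T
    return A

x = rng.standard_normal(n)
assert np.allclose(mlr_matvec(levels, x), mlr_dense(levels, n) @ x)
```

Storage here is the factors only (the rank-2 pair plus two rank-1 pairs), and the matvec never forms a dense matrix, which is the efficiency the summary refers to.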
📝 Abstract
We consider multilevel low rank (MLR) matrices, defined as a row and column permutation of a sum of matrices, each one a block diagonal refinement of the previous one, with all blocks low rank given in factored form. MLR matrices extend low rank matrices but share many of their properties, such as the total storage required and complexity of matrix-vector multiplication. We address three problems that arise in fitting a given matrix by an MLR matrix in the Frobenius norm. The first problem is factor fitting, where we adjust the factors of the MLR matrix. The second is rank allocation, where we choose the ranks of the blocks in each level, subject to the total rank having a given value, which preserves the total storage needed for the MLR matrix. The final problem is to choose the hierarchical partition of rows and columns, along with the ranks and factors. This paper is accompanied by an open source package that implements the proposed methods.
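The factor-fitting problem from the abstract can be illustrated with a toy alternating scheme: holding one level fixed, refit the other level's blocks to the residual via truncated SVDs, and repeat. This is a hedged sketch of the block-coordinate idea under assumed fixed ranks and a fixed symmetric split, not the paper's exact algorithm or the package's interface; `fit_two_level` and its parameters are invented for illustration.

```python
import numpy as np

def truncated_svd(M, r):
    """Factors (U, V) of the best rank-r approximation M ~= U V^T (Eckart-Young)."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U[:, :r] * s[:r], Vt[:r].T

def fit_two_level(A, split, r1, r2, sweeps=20):
    """Toy factor fitting for a square A: one level-1 block of rank r1 plus
    two diagonal level-2 blocks of rank r2, refit alternately against the
    residual left by the other level (a block coordinate descent)."""
    n = A.shape[0]  # assumes A is square, for simplicity
    U1, V1 = truncated_svd(A, r1)          # initialize level 1 alone
    blocks = [(0, split), (split, n)]      # assumed row/column partition
    for _ in range(sweeps):
        # refit level-2 blocks to the residual of level 1 (optimal per block)
        R = A - U1 @ V1.T
        L2 = [truncated_svd(R[a:b, a:b], r2) for a, b in blocks]
        # refit level 1 to the residual of level 2
        R = A.copy()
        for (a, b), (U2, V2) in zip(blocks, L2):
            R[a:b, a:b] -= U2 @ V2.T
        U1, V1 = truncated_svd(R, r1)
    return U1, V1, blocks, L2
```

Because each refit is optimal with the other level held fixed, the Frobenius error is nonincreasing across sweeps, so the two-level fit is never worse than the plain rank-r1 truncated SVD it starts from. Rank allocation and partition selection, the other two problems in the abstract, would sit on top of such a fitting routine.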