🤖 AI Summary
To address the high computational cost and weak structural awareness in large-scale high-order tensor recovery, this paper proposes a fast low-rank approximation algorithm that integrates Krylov subspace iteration with randomized projection, and establishes the first generalized nonconvex regularization modeling framework. Methodologically, it unifies randomized low-rank approximation and nonconvex regularization within a single model, theoretically derives an upper bound on approximation error, and supports diverse recovery tasks, including quantized and unquantized settings. The approach employs block Lanczos bidiagonalization coupled with adaptive optimization, achieving high accuracy while substantially reducing memory footprint and time complexity. Experiments demonstrate that the method consistently outperforms state-of-the-art approaches across multiple large-scale tensor datasets, exhibiting both real-time efficiency and strong scalability.
📝 Abstract
Existing tensor recovery methods fail to recognize the impact of tensor scale variations on their structural characteristics, and they face prohibitive computational costs when dealing with large-scale high-order tensor data. To alleviate these issues, assisted by Krylov subspace iteration, the block Lanczos bidiagonalization process, and random projection strategies, this article first devises two fast and accurate randomized algorithms for the low-rank tensor approximation (LRTA) problem. Theoretical bounds on the accuracy of the approximation error estimate are established. Next, we develop a novel generalized nonconvex modeling framework tailored to large-scale tensor recovery, in which a new regularization paradigm is exploited to achieve an insightful prior representation for large-scale tensors. On this basis, we further investigate new unified nonconvex models and efficient optimization algorithms for several typical high-order tensor recovery tasks in both unquantized and quantized settings. To render the proposed algorithms practical and efficient for large-scale tensor data, the proposed randomized LRTA schemes are integrated into their central and time-intensive computations. Finally, we conduct extensive experiments on various large-scale tensors, whose results demonstrate the practicability, effectiveness, and superiority of the proposed method in comparison with state-of-the-art approaches.
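The core idea behind the randomized LRTA schemes described above — compress the data with a random projection, refine the captured subspace with a few Krylov-style iterations, then solve a small reduced problem — can be illustrated on the simpler matrix analogue. The sketch below is not the paper's tensor algorithm (which uses a block Lanczos bidiagonalization process on high-order tensors); it is a minimal randomized low-rank approximation with subspace (power) iteration as a stand-in for the Krylov refinement, and the function name and parameters are our own illustrative choices.

```python
import numpy as np

def randomized_lowrank(A, rank, oversample=10, n_iter=2, seed=None):
    """Randomized rank-`rank` approximation of a matrix A.

    Matrix-level sketch of the random-projection + subspace-iteration
    idea; a stand-in for the paper's tensor-level block Lanczos scheme.
    """
    rng = np.random.default_rng(seed)
    m, n = A.shape
    k = min(rank + oversample, n)      # oversampled sketch size
    Omega = rng.standard_normal((n, k))
    Y = A @ Omega                      # random projection of the range
    for _ in range(n_iter):
        # Subspace iterations sharpen the captured spectrum, playing a
        # role analogous to Krylov-subspace refinement.
        Y, _ = np.linalg.qr(A @ (A.T @ Y))
    Q, _ = np.linalg.qr(Y)             # orthonormal basis for range(A)
    B = Q.T @ A                        # small (k x n) reduced problem
    Ub, s, Vt = np.linalg.svd(B, full_matrices=False)
    U = Q @ Ub
    return U[:, :rank], s[:rank], Vt[:rank, :]

# Usage: recover an exactly rank-5 matrix and measure relative error.
rng = np.random.default_rng(0)
A = rng.standard_normal((200, 5)) @ rng.standard_normal((5, 150))
U, s, Vt = randomized_lowrank(A, rank=5, seed=0)
err = np.linalg.norm(A - U @ np.diag(s) @ Vt) / np.linalg.norm(A)
```

Because the sketch only ever multiplies by tall-and-skinny matrices, its cost scales with the sketch size k rather than the full dimensions, which is the source of the memory and time savings the abstract claims for the tensor setting.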