Accelerating Large-Scale Regularized High-Order Tensor Recovery

๐Ÿ“… 2025-06-11
๐Ÿ“ˆ Citations: 0
โœจ Influential: 0
๐Ÿ“„ PDF
๐Ÿค– AI Summary
To address the high computational cost and weak structural awareness of large-scale high-order tensor recovery, this paper proposes a fast low-rank approximation algorithm that integrates Krylov subspace iteration with randomized projection, and establishes the first generalized nonconvex regularization modeling framework for this setting. Methodologically, it unifies randomized low-rank approximation and nonconvex regularization within a single model, derives a theoretical upper bound on the approximation error, and supports diverse recovery tasks in both quantized and unquantized settings. The approach employs block Lanczos bidiagonalization coupled with adaptive optimization, achieving high accuracy while substantially reducing memory footprint and time complexity. Experiments demonstrate that the method consistently outperforms state-of-the-art approaches across multiple large-scale tensor datasets, exhibiting both real-time efficiency and strong scalability.
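The combination of random projection with subspace (power) iteration described above can be illustrated in the matrix case. The sketch below is not the paper's tensor algorithm; it is a generic randomized low-rank approximation in the same spirit, and the function name and parameters (`oversample`, `n_iter`) are illustrative:

```python
import numpy as np

def randomized_low_rank(A, rank, oversample=10, n_iter=2, seed=0):
    """Matrix-case sketch of randomized low-rank approximation:
    a random projection captures the dominant column space, and a few
    power iterations (a simple stand-in for Krylov-subspace refinement)
    sharpen the captured spectrum before solving a small SVD."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    k = min(rank + oversample, min(m, n))
    Omega = rng.standard_normal((n, k))   # random test matrix
    Y = A @ Omega                         # sketch of the range of A
    for _ in range(n_iter):               # power iterations
        Y, _ = np.linalg.qr(Y)            # re-orthonormalize for stability
        Y = A @ (A.T @ Y)
    Q, _ = np.linalg.qr(Y)                # orthonormal basis for the sketch
    B = Q.T @ A                           # small (k x n) projected problem
    Ub, s, Vt = np.linalg.svd(B, full_matrices=False)
    return Q @ Ub[:, :rank], s[:rank], Vt[:rank]
```

The expensive full SVD of `A` is replaced by a QR factorization of a thin sketch plus an SVD of a small `k x n` matrix, which is where the memory and time savings come from.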

๐Ÿ“ Abstract
Currently, existing tensor recovery methods fail to recognize the impact of tensor scale variations on their structural characteristics. Furthermore, existing studies face prohibitive computational costs when dealing with large-scale high-order tensor data. To alleviate these issues, assisted by Krylov subspace iteration, the block Lanczos bidiagonalization process, and random projection strategies, this article first devises two fast and accurate randomized algorithms for the low-rank tensor approximation (LRTA) problem. Theoretical bounds on the accuracy of the approximation error estimate are established. Next, we develop a novel generalized nonconvex modeling framework tailored to large-scale tensor recovery, in which a new regularization paradigm is exploited to achieve insightful prior representation for large-scale tensors. On this basis, we further investigate new unified nonconvex models and efficient optimization algorithms for several typical high-order tensor recovery tasks in unquantized and quantized situations. To render the proposed algorithms practical and efficient for large-scale tensor data, the proposed randomized LRTA schemes are integrated into their central and time-intensive computations. Finally, we conduct extensive experiments on various large-scale tensors, whose results demonstrate the practicability, effectiveness and superiority of the proposed method in comparison with some state-of-the-art approaches.
Problem

Research questions and friction points this paper is trying to address.

Addressing high computational costs in large-scale tensor recovery
Developing fast randomized algorithms for low-rank tensor approximation
Creating a nonconvex framework for efficient tensor prior representation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Randomized algorithms for low-rank tensor approximation
Generalized nonconvex modeling framework for tensor recovery
Integration of randomized LRTA into optimization algorithms
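The block Lanczos bidiagonalization named in the summary can be sketched in its simplest (block size 1) matrix form, the Golub-Kahan-Lanczos recurrence. This is a generic textbook procedure, not the paper's tensor variant, and the function name is illustrative:

```python
import numpy as np

def gk_bidiag(A, k, seed=0):
    """Golub-Kahan-Lanczos bidiagonalization (block size 1).
    Builds orthonormal bases U (m x k), V (n x k) and an upper-bidiagonal
    k x k matrix B satisfying A @ V = U @ B, from which a rank-k
    approximation of A can be cheaply extracted."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    U = np.zeros((m, k)); V = np.zeros((n, k)); B = np.zeros((k, k))
    v = rng.standard_normal(n); v /= np.linalg.norm(v)   # random start vector
    beta, u = 0.0, np.zeros(m)
    for j in range(k):
        V[:, j] = v
        p = A @ v - beta * u
        p -= U[:, :j] @ (U[:, :j].T @ p)       # full reorthogonalization
        alpha = np.linalg.norm(p); u = p / alpha
        U[:, j] = u; B[j, j] = alpha           # diagonal of B
        r = A.T @ u - alpha * v
        r -= V[:, :j + 1] @ (V[:, :j + 1].T @ r)
        beta = np.linalg.norm(r)
        if j + 1 < k:
            B[j, j + 1] = beta                 # superdiagonal of B
            v = r / beta
    return U, B, V
```

Each step touches `A` only through matrix-vector products, which is what makes the recurrence attractive inside the time-intensive inner computations of large-scale recovery.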
๐Ÿ”Ž Similar Papers
No similar papers found.
W
Wenjin Qin
School of Mathematics and Statistics, Southwest University, Chongqing 400715, China
Hailin Wang
Southwestern University of Finance and Economics
data mining, machine learning, NLP, information extraction, relation extraction
Jingyao Hou
School of Mathematics and Information, China West Normal University, Nanchong 637009, China
Jianjun Wang
Professor, Southwest University (SWU)
Machine Learning