An Efficient and Interpretable Autoregressive Model for High-Dimensional Tensor-Valued Time Series

📅 2025-06-02
📈 Citations: 0
Influential: 0
🤖 AI Summary
High-dimensional tensor-valued time series autoregression faces a trade-off: Tucker decomposition offers strong interpretability but becomes computationally inefficient as the tensor order grows, whereas CP decomposition is efficient yet lacks a clear statistical interpretation; moreover, strict low-rank assumptions can lead to model misspecification. Method: We propose a CP-decomposition-driven supervised factor regression framework: it first extracts low-dimensional CP features from both responses and covariates, then models the mapping between them via a structured coefficient tensor with low-rank plus sparse components to accommodate heterogeneous signals. Contribution/Results: The approach integrates CP decomposition with supervised factor modeling to yield an interpretable, structured coefficient tensor, and we establish non-asymptotic theoretical guarantees for estimation consistency and accuracy. Extensive simulations and an ENSO forecasting application demonstrate significantly improved predictive performance while preserving strong interpretability.

📝 Abstract
In autoregressive modeling for tensor-valued time series, Tucker decomposition, when applied to the coefficient tensor, provides a clear interpretation of supervised factor modeling but loses its efficiency rapidly with increasing tensor order. Conversely, canonical polyadic (CP) decomposition maintains efficiency but lacks a precise statistical interpretation. To attain both interpretability and powerful dimension reduction, this paper proposes a novel approach under the supervised factor modeling paradigm, which first uses CP decomposition to extract response and covariate features separately and then regresses response features on covariate ones. This leads to a new CP-based low-rank structure for the coefficient tensor. Furthermore, to address heterogeneous signals or potential model misspecifications arising from stringent low-rank assumptions, a low-rank plus sparse model is introduced by incorporating an additional sparse coefficient tensor. Nonasymptotic properties are established for the ordinary least squares estimators, and an alternating least squares algorithm is introduced for optimization. Theoretical properties of the proposed methodology are validated by simulation studies, and its enhanced prediction performance and interpretability are demonstrated by the El Niño-Southern Oscillation example.
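The alternating least squares (ALS) scheme mentioned in the abstract can be illustrated with a minimal sketch of generic CP-ALS on a 3-way tensor. This is not the paper's estimator for the coefficient tensor, only the underlying decomposition idea; the helper names `unfold`, `khatri_rao`, and `cp_als` are illustrative, and NumPy is assumed.

```python
import numpy as np

def unfold(T, mode):
    # Mode-n unfolding: move axis `mode` to the front and flatten the rest.
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def khatri_rao(U, V):
    # Column-wise Kronecker product: (I*J x R) from (I x R) and (J x R).
    I, R = U.shape
    J = V.shape[0]
    return (U[:, None, :] * V[None, :, :]).reshape(I * J, R)

def cp_als(T, rank, n_iter=200, seed=0):
    """Rank-R CP decomposition of a 3-way tensor by alternating least
    squares: fix two factor matrices, solve for the third in closed form."""
    rng = np.random.default_rng(seed)
    I, J, K = T.shape
    A = rng.standard_normal((I, rank))
    B = rng.standard_normal((J, rank))
    C = rng.standard_normal((K, rank))
    for _ in range(n_iter):
        # Each update is an ordinary least squares solve for one factor.
        A = unfold(T, 0) @ khatri_rao(B, C) @ np.linalg.pinv((B.T @ B) * (C.T @ C))
        B = unfold(T, 1) @ khatri_rao(A, C) @ np.linalg.pinv((A.T @ A) * (C.T @ C))
        C = unfold(T, 2) @ khatri_rao(A, B) @ np.linalg.pinv((A.T @ A) * (B.T @ B))
    return A, B, C
```

The fitted tensor can be reconstructed as `np.einsum('ir,jr,kr->ijk', A, B, C)`; the low-dimensional factor matrices play the role of extracted features in the supervised factor view.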
Problem

Research questions and friction points this paper is trying to address.

Balancing interpretability and efficiency in tensor autoregressive models
Addressing heterogeneous signals in low-rank tensor time series
Improving prediction performance with interpretable factor modeling
Innovation

Methods, ideas, or system contributions that make the work stand out.

CP decomposition for feature extraction
Low-rank plus sparse coefficient tensor
Alternating least squares algorithm for optimization
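The "low-rank plus sparse" structure in the list above can be sketched at the matrix level. This is a generic illustration, not the paper's tensor estimator: it alternates a truncated SVD for the low-rank part with elementwise soft-thresholding for the sparse part, and the threshold `lam` is an illustrative tuning parameter.

```python
import numpy as np

def soft_threshold(X, lam):
    # Elementwise shrinkage operator producing the sparse component.
    return np.sign(X) * np.maximum(np.abs(X) - lam, 0.0)

def low_rank_plus_sparse(M, rank, lam, n_iter=50):
    """Decompose M ~ L + S by alternating projections: a rank-truncated
    SVD for the low-rank part L, then soft-thresholding of the residual
    for the sparse part S."""
    S = np.zeros_like(M)
    for _ in range(n_iter):
        U, s, Vt = np.linalg.svd(M - S, full_matrices=False)
        L = (U[:, :rank] * s[:rank]) @ Vt[:rank]
        S = soft_threshold(M - L, lam)
    return L, S
```

In the paper's setting the low-rank component instead carries CP structure on the coefficient tensor, but the role of the sparse add-on is the same: absorbing heterogeneous signals that a strict low-rank fit would misspecify.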
Yuxi Cai
Department of Statistics and Actuarial Science, University of Hong Kong
Lan Li
University of North Carolina at Chapel Hill
Yize Wang
Department of Statistics and Actuarial Science, University of Hong Kong
Guodong Li
Department of Statistics and Actuarial Science, University of Hong Kong