TiVaT: A Transformer with a Single Unified Mechanism for Capturing Asynchronous Dependencies in Multivariate Time Series Forecasting

📅 2024-10-02
📈 Citations: 0
Influential: 0
🤖 AI Summary
In multivariate time series forecasting, existing models struggle to capture complex asynchronous dependencies—such as lead-lag relationships—among variables, limiting predictive accuracy. To address this, we propose TiVaT, a unified time- and variable-aware joint modeling framework. TiVaT introduces a novel Joint-Axis self-attention mechanism that simultaneously models dynamic interactions along both the temporal and variable axes within a single module. It further incorporates a distance-aware 2D joint sampling strategy and learnable time- and variable-dependent positional embeddings, breaking away from conventional channel-separate paradigms. Extensive experiments on multiple benchmark datasets demonstrate that TiVaT consistently outperforms state-of-the-art methods. Notably, under strongly asynchronous scenarios, it achieves an average 12.7% reduction in prediction error, validating its effectiveness and generalizability in modeling asynchronous dependencies.

📝 Abstract
Multivariate time series (MTS) forecasting is vital across various domains but remains challenging due to the need to simultaneously model temporal and inter-variate dependencies. Existing channel-dependent models, where Transformer-based models dominate, process these dependencies separately, limiting their capacity to capture complex interactions such as lead-lag dynamics. To address this issue, we propose TiVaT (Time-variate Transformer), a novel architecture built around a single unified module, the Joint-Axis (JA) attention module, which performs temporal and variate modeling concurrently. The JA attention module dynamically selects relevant features, making it particularly suited to capturing asynchronous interactions. In addition, we introduce distance-aware time-variate sampling in the JA attention, a novel mechanism that extracts significant patterns through a learned 2D embedding space while reducing noise. Extensive experiments demonstrate TiVaT's strong overall performance across diverse datasets, particularly in scenarios with intricate asynchronous dependencies.
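To make the core idea concrete: joint-axis attention treats every (time step, variable) pair as one token and runs ordinary self-attention over that flattened grid, so a lagging variable can attend directly to an earlier step of a leading variable. The sketch below is a minimal, hedged illustration of that flattening with additive time- and variable-dependent positional embeddings; all function and parameter names are hypothetical, not TiVaT's actual implementation.

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def joint_axis_attention(x, time_pos, var_pos):
    """Single-head attention over the flattened (time, variable) token grid.

    x:        (T, V, d) per-(step, variable) embeddings of the series
    time_pos: (T, d) learnable time-dependent positional embeddings
    var_pos:  (V, d) learnable variable-dependent positional embeddings
    """
    T, V, d = x.shape
    # Broadcast-add both positional embeddings, then flatten to T*V joint tokens.
    tokens = (x + time_pos[:, None, :] + var_pos[None, :, :]).reshape(T * V, d)
    # Plain scaled dot-product self-attention: every (t, v) token can attend to
    # every other, so cross-variable lead-lag pairs are reachable in one hop.
    scores = tokens @ tokens.T / np.sqrt(d)
    out = softmax(scores) @ tokens
    return out.reshape(T, V, d)

rng = np.random.default_rng(0)
T, V, d = 8, 3, 16
y = joint_axis_attention(rng.normal(size=(T, V, d)),
                         rng.normal(size=(T, d)),
                         rng.normal(size=(V, d)))
print(y.shape)
```

Note the contrast with channel-separate designs, which would run attention along the time axis and the variable axis in two disjoint passes; here a single attention map spans both axes at once, at the cost of O((TV)^2) scores, which is what motivates the paper's sampling mechanism.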
Problem

Research questions and friction points this paper is trying to address.

Multivariate Time Series Forecasting
Asynchronous Relationships
Leading-Lagging Variables
Innovation

Methods, ideas, or system contributions that make the work stand out.

TiVaT Model
Joint Axis Attention Module
Distance-aware Temporal-Variable Sampling
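One way to read the distance-aware sampling contribution: instead of letting each joint token attend to all T*V others, each token keeps only its k nearest neighbors in a learned 2D embedding space, pruning noisy long-range pairs. The helper below is a hypothetical sketch of that neighbor selection step only (the coordinates would be learned in the real model, and the names here are not from the paper).

```python
import numpy as np

def distance_aware_topk(coords, k):
    """For each joint (time, variable) token, return the indices of its k
    nearest neighbours in a learned 2D embedding space.

    coords: (N, 2) learned 2D coordinates, one per joint token
    returns: (N, k) neighbour indices (each token's own index ranks first,
             since its self-distance is zero)
    """
    diff = coords[:, None, :] - coords[None, :, :]      # (N, N, 2) offsets
    dist = np.sqrt((diff ** 2).sum(axis=-1))            # (N, N) Euclidean
    return np.argsort(dist, axis=1)[:, :k]

# Toy coordinates: tokens 0, 1, 3 cluster together; token 2 is far away.
coords = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [0.0, 0.2]])
nbrs = distance_aware_topk(coords, k=2)
print(nbrs[0])  # token 0's two nearest: itself and token 1 -> [0 1]
```

Restricting attention to these sampled neighbors turns the O((TV)^2) joint attention into O(TV * k), which is the usual trade-off such sampling schemes target.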
👥 Authors
Junwoo Ha
AIM Intelligence
Hyukjae Kwon
Graduate School of Information, Yonsei University, Seoul, Yonsei-ro 50, South Korea
Sungsoo Kim
Graduate School of Information, Yonsei University, Seoul, Yonsei-ro 50, South Korea
Kisu Lee
Graduate School of Information, Yonsei University, Seoul, Yonsei-ro 50, South Korea
Seungjae Park
Ha Young Kim
Graduate School of Information, Yonsei University, Seoul, Yonsei-ro 50, South Korea