TSRM: A Lightweight Temporal Feature Encoding Architecture for Time Series Forecasting and Imputation

📅 2025-04-26
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses multivariate time series forecasting and missing-value imputation. The authors propose TSRM, a lightweight temporal representation model with a hybrid architecture that combines a multi-branch CNN encoder with Transformer-style self-attention. TSRM employs a shared representation layer, enabling unified support for both forecasting and imputation tasks. Its design emphasizes efficient feature aggregation and end-to-end training to balance expressive power against computational cost. Evaluated on seven established benchmarks, TSRM outperforms state-of-the-art methods on most datasets: it improves both forecasting accuracy and imputation quality, reduces average parameter count by 37%, and accelerates inference by 2.1×. These results support the effectiveness and generalizability of TSRM's lightweight architecture.

📝 Abstract
We introduce a temporal feature encoding architecture called Time Series Representation Model (TSRM) for multivariate time series forecasting and imputation. The architecture is structured around CNN-based representation layers, each dedicated to an independent representation learning task and designed to capture diverse temporal patterns, followed by an attention-based feature-extraction layer and a merge layer that aggregates the extracted features. The overall configuration is inspired by the Transformer encoder, with self-attention mechanisms at its core. TSRM outperforms state-of-the-art approaches on most of the seven established benchmark datasets in our empirical evaluation, for both forecasting and imputation tasks, while significantly reducing complexity in the form of learnable parameters. The source code is available at https://github.com/RobertLeppich/TSRM.
Problem

Research questions and friction points this paper is trying to address.

Lightweight temporal feature encoding for time series tasks
Improving multivariate forecasting and imputation accuracy
Reducing model complexity with fewer learnable parameters
Innovation

Methods, ideas, or system contributions that make the work stand out.

CNN-based layers for diverse temporal patterns
Attention-based feature extraction layer
Transformer-inspired self-attention mechanisms
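The three contributions above describe one encoding layer: parallel CNN branches capture temporal patterns at different scales, self-attention extracts features across time steps, and a merge layer aggregates the result. A minimal NumPy sketch of that flow is shown below; the kernel sizes, random untrained weights, and residual merge are illustrative assumptions, not the paper's actual configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv_branch(x, kernel_size):
    """One CNN branch: depthwise 1D convolution with 'same' padding.
    x: (T, C) multivariate series; weights are random (untrained sketch)."""
    T, C = x.shape
    w = rng.standard_normal((kernel_size, C)) / kernel_size
    pad = kernel_size // 2
    xp = np.pad(x, ((pad, pad), (0, 0)))
    out = np.empty_like(x)
    for t in range(T):
        out[t] = np.sum(xp[t:t + kernel_size] * w, axis=0)
    return out

def self_attention(h):
    """Plain scaled dot-product self-attention over time steps (Q = K = V = h)."""
    d = h.shape[-1]
    scores = h @ h.T / np.sqrt(d)
    scores -= scores.max(axis=-1, keepdims=True)  # numerical stability
    attn = np.exp(scores)
    attn /= attn.sum(axis=-1, keepdims=True)
    return attn @ h

def tsrm_layer(x, kernel_sizes=(3, 5, 7)):
    """Multi-branch CNN -> self-attention -> merge (projection + residual)."""
    branches = [conv_branch(x, k) for k in kernel_sizes]  # diverse temporal patterns
    h = np.concatenate(branches, axis=-1)                 # (T, C * num_branches)
    a = self_attention(h)                                 # attention-based extraction
    w_merge = rng.standard_normal((a.shape[-1], x.shape[-1])) / a.shape[-1]
    return x + a @ w_merge                                # merge layer, residual connection

x = rng.standard_normal((96, 7))  # 96 time steps, 7 variates
y = tsrm_layer(x)
print(y.shape)
```

Because the output has the same shape as the input, such layers can be stacked Transformer-encoder style, which is consistent with the encoder-inspired configuration described in the abstract.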
Robert Leppich
Department of Computer Science, University of Wuerzburg, Germany
Michael Stenger
Department of Computer Science, University of Wuerzburg, Germany
Daniel Grillmeyer
Department of Computer Science, University of Wuerzburg, Germany
Vanessa Borst
Department of Computer Science, University of Wuerzburg, Germany
Samuel Kounev
Professor of Computer Science, University of Würzburg
Distributed Systems · Performance Engineering · Benchmarking · Scientific Workflows · Cyber Security