A Diffusion Model for Regular Time Series Generation from Irregular Data with Completion and Masking

📅 2025-10-08
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing generative methods for irregularly sampled time series with missing values suffer from significant modeling bias, high computational overhead, and distorted neighborhood structures. This paper proposes a two-stage generative framework: first, a time-series Transformer performs structure-preserving imputation on the raw irregular sequence, constructing natural spatiotemporal neighborhoods; second, the imputed sequence is converted into an image-like representation and processed by a masked visual diffusion model (e.g., ImagenTime), which reduces reliance on imputation accuracy and improves robustness. This design decouples imputation guidance from generative modeling while letting the two reinforce each other. Experiments demonstrate state-of-the-art performance: a 70% relative improvement in generation quality (measured by discriminative score), an 85% reduction in computational cost, and superior robustness across varying missingness patterns and sampling irregularity.

📝 Abstract
Generating realistic time series data is critical for applications in healthcare, finance, and science. However, irregular sampling and missing values present significant challenges. While prior methods address these irregularities, they often yield suboptimal results and incur high computational costs. Recent advances in regular time series generation, such as the diffusion-based ImagenTime model, demonstrate strong, fast, and scalable generative capabilities by transforming time series into image representations, making them a promising solution. However, extending ImagenTime to irregular sequences using simple masking introduces "unnatural" neighborhoods, where missing values replaced by zeros disrupt the learning process. To overcome this, we propose a novel two-step framework: first, a Time Series Transformer completes irregular sequences, creating natural neighborhoods; second, a vision-based diffusion model with masking minimizes dependence on the completed values. This approach leverages the strengths of both completion and masking, enabling robust and efficient generation of realistic time series. Our method achieves state-of-the-art performance, improving the discriminative score by 70% and reducing computational cost by 85% relative to prior methods. Code is at https://github.com/azencot-group/ImagenI2R.
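The abstract's two-step pipeline (complete the irregular sequence, then hand an image-like representation plus an observation mask to a diffusion model) can be sketched roughly as follows. This is a minimal illustration, not the paper's implementation: `delay_embed` assumes an ImagenTime-style series-to-image transform built from sliding windows, and plain linear interpolation stands in for the Time Series Transformer completion step.

```python
import numpy as np

def delay_embed(x: np.ndarray, window: int, stride: int) -> np.ndarray:
    """Stack sliding windows of a 1-D series into rows of a 2-D 'image'."""
    n = (len(x) - window) // stride + 1
    return np.stack([x[i * stride : i * stride + window] for i in range(n)])

# Toy irregular series: NaNs mark unobserved timesteps (~30% missing).
rng = np.random.default_rng(0)
t = np.linspace(0, 4 * np.pi, 64)
series = np.sin(t)
observed = rng.random(64) > 0.3
series_raw = np.where(observed, series, np.nan)

# Step 1 (stand-in for the paper's Time Series Transformer):
# complete the sequence so neighborhoods look "natural" rather than
# zero-filled. Linear interpolation is a placeholder for that model.
idx = np.arange(64)
series_completed = np.interp(idx, idx[observed], series_raw[observed])

# Step 2: transform both the completed series and the observation mask
# into image-like arrays; a masked vision diffusion model would train on
# `img` while the mask limits dependence on the completed (unobserved)
# pixels, e.g. by down-weighting their loss contribution.
img = delay_embed(series_completed, window=16, stride=4)
mask = delay_embed(observed.astype(float), window=16, stride=4)
print(img.shape, mask.shape)  # (13, 16) (13, 16)
```

The key point the sketch illustrates: after completion there are no zero-filled holes in `img`, so window neighborhoods are smooth, while `mask` still records which pixels were actually observed.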
Problem

Research questions and friction points this paper is trying to address.

Generating realistic time series from irregular data
Overcoming unnatural neighborhoods in masked diffusion models
Reducing computational costs while improving generation quality
Innovation

Methods, ideas, or system contributions that make the work stand out.

Transformer completes irregular time series data
Vision diffusion model minimizes completed value dependence
Two-step framework combines completion and masking techniques
Gal Fadlon
Department of Computer Science, Ben-Gurion University of The Negev
Idan Arbiv
Department of Computer Science, Ben-Gurion University of The Negev
Nimrod Berman
Ben Gurion University
Deep Learning
Omri Azencot
Senior Lecturer (Assistant Professor) of Computer Science, BGU
Machine Learning · Representation Learning · Generative Modeling · Sequential Modeling