🤖 AI Summary
Existing time-series generative models suffer from poor generalization, particularly across heterogeneous domains, limiting their utility in data augmentation and privacy-preserving applications. To address this, we propose a domain-aware diffusion-based framework for multi-domain time-series generation. Our approach introduces a time-series semantic prototype module and a prototype assignment mechanism, which disentangle temporal features into compositional, interpretable "tokens" to enable explicit modeling and transfer of domain-specific knowledge. Leveraging few-shot prompt extraction and domain-adaptive conditional generation, our model achieves state-of-the-art in-domain generation quality across multiple real-world benchmarks. Moreover, it significantly improves zero-shot and few-shot cross-domain generation performance on unseen domains. This work advances both the interpretability and the cross-domain adaptability of general-purpose time-series generation.
📝 Abstract
Time series generation models are crucial for applications like data augmentation and privacy preservation. Most existing time series generation models are designed to generate data from a single specified domain. While leveraging data from other domains for better generalization has proven effective in other application areas, this approach remains challenging for time series modeling due to the large divergence in patterns among different real-world time series categories. In this paper, we propose a multi-domain time series diffusion model with domain prompts, named TimeDP. In TimeDP, we utilize a time series semantic prototype module which defines time series prototypes to represent a time series basis, with each prototype vector serving as a "word" representing some elementary time series feature. A prototype assignment module is applied to extract domain-specific prototype weights, which serve as learned domain prompts used as the generation condition. During sampling, we extract the "domain prompt" from a few-shot set of samples from the target domain and use it as the condition to generate time series samples. Experiments demonstrate that our method outperforms baselines, providing state-of-the-art in-domain generation quality and strong unseen-domain generation capability.
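The prototype assignment and few-shot prompt extraction described above can be sketched in a minimal form. This is an illustrative reconstruction, not the paper's implementation: the function names (`assign_prototypes`, `extract_domain_prompt`), the dot-product-plus-softmax assignment, and the toy dimensions are all assumptions for the sketch; the actual TimeDP modules are learned end-to-end inside the diffusion model.

```python
import numpy as np

rng = np.random.default_rng(0)

def assign_prototypes(embeddings, prototypes):
    """Compute softmax similarity weights between sample embeddings and the
    prototype bank, giving one weight vector per sample (hypothetical form
    of the assignment module)."""
    # embeddings: (n, d), prototypes: (K, d)
    logits = embeddings @ prototypes.T            # (n, K) dot-product similarity
    logits -= logits.max(axis=1, keepdims=True)   # numerical stability
    w = np.exp(logits)
    return w / w.sum(axis=1, keepdims=True)

def extract_domain_prompt(fewshot_embeddings, prototypes):
    """Average per-sample prototype weights over a few-shot set to form a
    single 'domain prompt' vector used to condition generation."""
    weights = assign_prototypes(fewshot_embeddings, prototypes)
    return weights.mean(axis=0)                   # (K,)

# Toy setup: a bank of 16 prototype "words" of dimension 32, and
# embeddings of 5 few-shot samples from a target domain.
prototypes = rng.normal(size=(16, 32))
fewshot = rng.normal(size=(5, 32))
prompt = extract_domain_prompt(fewshot, prototypes)
```

The resulting `prompt` is a distribution over the prototype vocabulary; conditioning the diffusion sampler on it is what steers generation toward the target domain's patterns.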