Forging Time Series with Language: A Large Language Model Approach to Synthetic Data Generation

📅 2025-05-21
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the challenge of simultaneously preserving statistical properties, dynamic patterns, and generation flexibility in multivariate time series synthesis. We propose SDForger, a framework built on a compact “time series → text” representation: raw multivariate sequences are mapped to structured textual embeddings, enabling efficient adaptation of autoregressive LLMs (e.g., Llama, Phi) via tabular embeddings and lightweight LoRA fine-tuning in few-shot settings. SDForger supports text-conditioned generation, establishing a multimodal modeling pathway that bridges time series and natural language. Experiments across multiple benchmark datasets show that SDForger matches or outperforms existing time series generative models in distributional similarity, dynamic fidelity, and downstream forecasting accuracy in many scenarios.

📝 Abstract
SDForger is a flexible and efficient framework for generating high-quality multivariate time series using LLMs. Leveraging a compact data representation, SDForger provides synthetic time series generation from a few samples and low-computation fine-tuning of any autoregressive LLM. Specifically, the framework transforms univariate and multivariate signals into tabular embeddings, which are then encoded into text and used to fine-tune the LLM. At inference, new textual embeddings are sampled and decoded into synthetic time series that retain the original data's statistical properties and temporal dynamics. Across a diverse range of datasets, SDForger outperforms existing generative models in many scenarios, both in similarity-based evaluations and downstream forecasting tasks. By enabling textual conditioning in the generation process, SDForger paves the way for multimodal modeling and the streamlined integration of time series with textual information. SDForger source code will be open-sourced soon.
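The abstract describes a pipeline: compress each signal into a compact tabular embedding, serialize the embedding as text for LLM fine-tuning, then decode sampled text back into a time series. The paper's exact embedding is not specified here, so the sketch below is purely illustrative: it uses a low-rank SVD projection as a stand-in for the tabular embedding, and plain comma-separated rows as the textual encoding. The function names and the choice of SVD are assumptions, not the authors' method.

```python
import numpy as np

def encode_series_to_text(series, n_coeffs=4):
    """Project a (timesteps, channels) series onto a low-rank basis via SVD
    and serialize the resulting coefficient table as text rows.
    Illustrative stand-in for SDForger's tabular embedding."""
    U, s, Vt = np.linalg.svd(series, full_matrices=False)
    k = min(n_coeffs, len(s))
    coeffs = U[:, :k] * s[:k]   # (timesteps, k) coefficient table
    basis = Vt[:k]              # (k, channels) basis, kept for decoding
    rows = [",".join(f"{v:.4f}" for v in row) for row in coeffs]
    return "\n".join(rows), basis

def decode_text_to_series(text, basis):
    """Parse textual coefficient rows and invert them back into a series."""
    coeffs = np.array([[float(v) for v in line.split(",")]
                       for line in text.splitlines()])
    return coeffs @ basis

# Toy example: a 3-channel random walk round-trips through the text form.
rng = np.random.default_rng(0)
ts = np.cumsum(rng.normal(size=(64, 3)), axis=0)
text, basis = encode_series_to_text(ts, n_coeffs=3)
recon = decode_text_to_series(text, basis)
```

In the paper's setting, the serialized rows would be the fine-tuning corpus for the LLM, and at inference the model would generate new rows that decode into synthetic series; here the round-trip only demonstrates that a compact textual embedding can be inverted with the statistics of the original preserved.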
Problem

Research questions and friction points this paper is trying to address.

Generating synthetic multivariate time series using LLMs
Transforming signals into text embeddings for LLM fine-tuning
Enhancing time series generation with textual conditioning
Innovation

Methods, ideas, or system contributions that make the work stand out.

LLM-based synthetic time series generation
Compact tabular embeddings for data representation
Textual conditioning for multimodal integration
👥 Authors
Cécile Rousseau — IBM Research Europe, Dublin
Tobia Boschi — IBM Research Europe
Giandomenico Cornacchia — IBM Research Europe, Dublin
Dhaval Salwala — IBM Research Europe, Dublin
Alessandra Pascale — IBM Research Europe, Dublin
Juan Bernabe Moreno — IBM Research Europe, Dublin