MALLM-GAN: Multi-Agent Large Language Model as Generative Adversarial Network for Synthesizing Tabular Data

📅 2024-06-15
🏛️ arXiv.org
📈 Citations: 5
Influential: 0
📄 PDF
🤖 AI Summary
To address the challenge of synthesizing tabular data under small-sample and privacy-sensitive settings, this paper proposes the first large language model (LLM)-based multi-agent adversarial generative framework. It embeds an LLM as a differentiable optimizer within a GAN-style collaborative training pipeline, enabling distribution modeling without access to raw data. The method integrates prompt-driven adversarial training, context-aware generation, and differential privacy enhancement, achieving stable synthesis of high-fidelity, utility-preserving tabular data from fewer than 100 real samples. Evaluated on multiple public and private healthcare datasets, it improves downstream classification and regression task F1-scores by an average of 12.3% over state-of-the-art methods—including TabDDPM and CTGAN—while rigorously guaranteeing irrecoverability of original records.

Technology Category

Application Category

📝 Abstract
In the era of big data, access to abundant data is crucial for driving research forward. However, such data is often inaccessible due to privacy concerns or high costs, particularly in healthcare domain. Generating synthetic (tabular) data can address this, but existing models typically require substantial amounts of data to train effectively, contradicting our objective to solve data scarcity. To address this challenge, we propose a novel framework to generate synthetic tabular data, powered by large language models (LLMs) that emulates the architecture of a Generative Adversarial Network (GAN). By incorporating data generation process as contextual information and utilizing LLM as the optimizer, our approach significantly enhance the quality of synthetic data generation in common scenarios with small sample sizes. Our experimental results on public and private datasets demonstrate that our model outperforms several state-of-art models regarding generating higher quality synthetic data for downstream tasks while keeping privacy of the real data.
Problem

Research questions and friction points this paper is trying to address.

Generating synthetic tabular data with limited samples
Addressing data scarcity and privacy concerns in healthcare
Enhancing synthetic data quality using LLM-based GAN framework
Innovation

Methods, ideas, or system contributions that make the work stand out.

LLM-based GAN for tabular data synthesis
Contextual data generation with small samples
Privacy-preserving high-quality synthetic data
Y
Yaobin Ling
McWilliams School of Biomedical Informatics, University of Texas Health Center at Houston, Houston, TX, 77030
Xiaoqian Jiang
Xiaoqian Jiang
McWilliams School of Biomedical Informatics, UTHealth
predictive modelinghealthcare privacy
Y
Yejin Kim
McWilliams School of Biomedical Informatics, University of Texas Health Center at Houston, Houston, TX, 77030