Grounding Synthetic Data Generation With Vision and Language Models

📅 2026-03-10
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the limitations of existing synthetic data evaluation metrics, which rely on opaque latent features and exhibit weak correlation with downstream task performance. To overcome this, the authors propose a novel vision–language joint framework that integrates generative modeling, semantic segmentation, and image captioning. The framework introduces, for the first time, a cross-modal consistency validation mechanism that enables interpretable and automated assessment of synthetic remote sensing data through semantic composition analysis and minimization of descriptive redundancy. The study constructs ARAS400k, a large-scale remote sensing dataset comprising 400,000 samples. Experimental results demonstrate that joint training with both synthetic and real data significantly outperforms baseline models trained exclusively on real data.

Technology Category

Application Category

📝 Abstract
Deep learning models benefit from increasing data diversity and volume, motivating synthetic data augmentation to improve existing datasets. However, existing evaluation metrics for synthetic data typically calculate latent feature similarity, which is difficult to interpret and does not always correlate with the contribution to downstream tasks. We propose a vision-language grounded framework for interpretable synthetic data augmentation and evaluation in remote sensing. Our approach combines generative models, semantic segmentation and image captioning with vision and language models. Based on this framework, we introduce ARAS400k: A large-scale Remote sensing dataset Augmented with Synthetic data for segmentation and captioning, containing 100k real images and 300k synthetic images, each paired with segmentation maps and descriptions. ARAS400k enables the automated evaluation of synthetic data by analyzing semantic composition, minimizing caption redundancy, and verifying cross-modal consistency between visual structures and language descriptions. Experimental results indicate that while models trained exclusively on synthetic data reach competitive performance levels, those trained with augmented data (a combination of real and synthetic images) consistently outperform real-data baselines. Consequently, this work establishes a scalable benchmark for remote sensing tasks, specifically in semantic segmentation and image captioning. The dataset is available at zenodo.org/records/18890661 and the code base at github.com/caglarmert/ARAS400k.
Problem

Research questions and friction points this paper is trying to address.

synthetic data evaluation
vision-language grounding
remote sensing
semantic segmentation
image captioning
Innovation

Methods, ideas, or system contributions that make the work stand out.

synthetic data generation
vision-language models
remote sensing
semantic segmentation
image captioning