AI Summary
This work addresses the limitations of existing web information extraction datasets, which are often small-scale, synthetically generated, or lack authentic webpage structural context. To this end, the authors present the first large-scale, real-world dataset for LLM-based web information extraction, comprising 93,695 diverse samples spanning multiple domains and languages. Each instance includes Markdown-formatted webpage content, a prompt, a JSON schema, the model's response, and metadata capturing extraction complexity. Constructed from user-consented telemetry data and refined through deduplication and schema balancing, the dataset supports benchmarking structured extraction, fine-tuning smaller models, and schema induction research. Experiments demonstrate that a 1.7B-parameter model fine-tuned on a subset of this data substantially narrows the performance gap with a 30B-parameter model, confirming the dataset's effectiveness in enabling efficient information extraction.
Abstract
The use of large language models for web information extraction is becoming increasingly fundamental to modern web information retrieval pipelines. However, existing datasets tend to be small, synthetic, or text-only, failing to capture the structural context of the web. We introduce ScrapeGraphAI-100k, a large-scale dataset comprising real-world LLM extraction events, collected via opt-in ScrapeGraphAI telemetry during Q2 and Q3 of 2025. Starting from 9M events, we deduplicate and balance by schema to produce 93,695 examples spanning diverse domains and languages. Each instance includes Markdown content, a prompt, a JSON schema, the LLM response, and complexity/validation metadata. We characterize the dataset's structural diversity and its failure modes as schema complexity increases. We also provide a fine-tuning experiment showing that a small language model (1.7B) trained on a subset narrows the gap to larger baselines (30B), underscoring the dataset's utility for efficient extraction. ScrapeGraphAI-100k enables fine-tuning small models, benchmarking structured extraction, and studying schema induction for web IR indexing, and is publicly available on HuggingFace.
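Since each instance pairs a JSON schema with an LLM response and validation metadata, a natural way to use the dataset is to check whether a response conforms to its schema. The sketch below is a minimal, stdlib-only illustration of that check; the toy schema, field names, and `conforms` helper are assumptions for illustration and do not reflect the dataset's actual column names or the authors' validation code.

```python
# Minimal sketch of per-record schema validation against a flat JSON
# schema, like those attached to each ScrapeGraphAI-100k instance.
# The schema and field names here are hypothetical examples.

# A toy object schema with two required fields.
schema = {
    "type": "object",
    "required": ["title", "price"],
    "properties": {
        "title": {"type": "string"},
        "price": {"type": "number"},
    },
}

# Map JSON Schema primitive type names to Python types.
_TYPES = {"string": str, "number": (int, float), "object": dict}

def conforms(response, schema):
    """Check required keys and primitive types for a flat object schema."""
    if not isinstance(response, _TYPES[schema["type"]]):
        return False
    if any(key not in response for key in schema.get("required", [])):
        return False
    props = schema.get("properties", {})
    return all(
        isinstance(value, _TYPES[props[key]["type"]])
        for key, value in response.items()
        if key in props
    )

print(conforms({"title": "Acme Widget", "price": 19.99}, schema))  # True
print(conforms({"title": "Acme Widget"}, schema))                  # False
```

A production pipeline would use a full validator (e.g. the `jsonschema` package) to handle nested objects, arrays, and format constraints, but the shape of the check is the same.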