🤖 AI Summary
Existing dataset discovery methods in data lakes—particularly joinable table identification—lack large-scale, high-fidelity, semantically annotated benchmarks. Method: This paper introduces the first high-quality tabular benchmark generation framework powered by large language models (LLMs), integrating domain knowledge injection and fine-grained semantic relationship modeling to automatically synthesize multi-domain, structurally realistic tabular corpora with human-level column-pair joinability annotations. Contribution/Results: Our framework overcomes key limitations of prior benchmarks—including narrow semantic coverage, domain homogeneity, and annotation distortion. Experiments demonstrate that the generated dataset significantly improves evaluation validity for semantic column matching and cross-table discovery tasks. It establishes a reproducible, scalable assessment infrastructure for data lake governance, enabling rigorous, standardized evaluation of discovery algorithms.
📝 Abstract
How to generate a large, realistic set of tables along with joinability relationships, to stress-test dataset discovery methods? Dataset discovery methods aim to automatically identify related data assets in a data lake. The development and evaluation of such solutions for customers from a wide range of business domains, relies on diverse, high quality and domain-specific tabular benchmarks. Large language models (LLMs) are trained on a wide variety of text data, which can provide a strong foundation of general and domain-specific knowledge. In this paper, we ask the question -- extit{can we leverage LLMs to generate a tabular benchmark adequate for evaluating the dataset discovery solutions?} In particular, we focus on the task of finding joinable tables which is the cornerstone of virtually every dataset discovery method. Current corpora for evaluating dataset discovery methods are mainly based on subsets of open data, and they suffer from three important issues: $i)$ they focus on very common and generic data types (e.g., address, id, name, etc.); $ii)$ they do not contain human-annotated column pairs; instead, practitioners synthesize ground truth using table splits (e.g., horizontal for table union search and vertical ones for joinability) and $iii)$ they do not focus on semantic column relationships.