🤖 AI Summary
To address the excessive storage and training overhead of graph representation learning on large-scale relational databases (RDBs), this paper proposes the Relational Database Distillation (RDD) framework, which compresses multi-table RDBs into compact heterogeneous graphs while preserving predictive utility. Methodologically, RDD avoids costly bilevel optimization via a kernel ridge regression–guided objective; enhances cross-task generalization through pseudo-labeling; and integrates multimodal column encoding, primary-foreign key structural modeling, and feature-level distillation. Experiments on multiple real-world databases demonstrate that RDD reduces graph size by an average of 72% while maintaining model performance on classification and regression tasks comparable to that achieved on the original databases. The framework thus establishes a scalable new paradigm for efficient graph learning over relational data.
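The summary's core idea — replacing bilevel optimization with a kernel ridge regression (KRR)-guided objective — can be sketched as follows. This is a minimal illustrative example, not the paper's actual implementation: the function names (`rbf_kernel`, `krr_distill_loss`), kernel choice, and hyperparameters (`lam`, `gamma`) are all assumptions. The key point is that KRR has a closed-form solution, so the fit on a small synthetic (distilled) set can be evaluated against the real data, and its pseudo-labels, without an inner training loop.

```python
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    # Pairwise RBF kernel between rows of A and rows of B.
    sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

def krr_distill_loss(X_syn, y_syn, X_real, y_real, lam=1e-3, gamma=1.0):
    # Fit KRR in closed form on the small distilled set ...
    K_ss = rbf_kernel(X_syn, X_syn, gamma)
    alpha = np.linalg.solve(K_ss + lam * np.eye(len(X_syn)), y_syn)
    # ... then score it on the real data (y_real may be pseudo-labels
    # when ground-truth labels for a task are unavailable).
    K_rs = rbf_kernel(X_real, X_syn, gamma)
    pred = K_rs @ alpha
    return float(((pred - y_real) ** 2).mean())

rng = np.random.default_rng(0)
X_real = rng.normal(size=(100, 8))
y_real = X_real[:, 0] + 0.1 * rng.normal(size=100)
X_syn = rng.normal(size=(10, 8))   # 10 distilled rows stand in for 100 real ones
y_syn = X_syn[:, 0]
loss = krr_distill_loss(X_syn, y_syn, X_real, y_real)
print(loss)
```

In a full distillation loop, `X_syn` (and possibly `y_syn`) would be the optimization variables, updated by gradient descent on this loss; the closed-form inner solve is what makes that a single-level rather than bilevel problem.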
📝 Abstract
Relational databases (RDBs) underpin the majority of global data management systems, where information is structured into multiple interdependent tables. To effectively use the knowledge within RDBs for predictive tasks, recent advances leverage graph representation learning to capture complex inter-table relations as multi-hop dependencies. Despite achieving state-of-the-art performance, these methods remain hindered by prohibitive storage overhead and excessive training time, owing to the massive scale of the database and the computational burden of intensive message passing across interconnected tables. To alleviate these concerns, we propose and study the problem of Relational Database Distillation (RDD). Specifically, we aim to distill large-scale RDBs into compact heterogeneous graphs while retaining the predictive power (i.e., utility) required for training graph-based models. Multi-modal column information is preserved through node features, and primary-foreign key relations are encoded via heterogeneous edges, thereby maintaining both data fidelity and relational structure. To ensure adaptability across diverse downstream tasks without resorting to the traditional, inefficient bi-level distillation framework, we further design a kernel ridge regression-guided objective with pseudo-labels, which produces high-quality features for the distilled graph. Extensive experiments on multiple real-world RDBs demonstrate that our solution substantially reduces the data size while maintaining competitive performance on classification and regression tasks, creating an effective pathway for scalable learning with RDBs.
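The abstract's mapping — column values to node features, primary-foreign key relations to heterogeneous edges — can be illustrated with a toy example. The table names (`users`, `orders`), columns, and dict-based graph layout below are purely hypothetical; real systems would use a heterogeneous graph library, but the construction principle is the same: one node type per table, one edge type per foreign-key reference.

```python
# Two linked tables: orders.user_id is a foreign key into users (primary key).
users = {1: {"age": 34}, 2: {"age": 28}}                 # pk -> node features
orders = {10: {"amount": 99.0, "user_id": 1},
          11: {"amount": 15.5, "user_id": 2},
          12: {"amount": 42.0, "user_id": 1}}

graph = {
    # One node type per table; rows become nodes carrying column features.
    "nodes": {"user": users, "order": orders},
    # One edge type per pk-fk link: (source table, fk column, target table).
    "edges": {("order", "user_id", "user"): []},
}
for oid, row in orders.items():
    # Each foreign-key reference becomes one typed edge.
    graph["edges"][("order", "user_id", "user")].append((oid, row["user_id"]))

print(graph["edges"][("order", "user_id", "user")])
```

Distillation then shrinks the node sets (e.g., many real rows summarized by a few synthetic ones) while this typed-edge structure, and hence the multi-hop dependencies a GNN exploits, is preserved.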