🤖 AI Summary
To build interpretable models for few-shot tabular data, this paper proposes Zero-shot Decision Trees (ZDT): a framework that generates structurally sound and semantically coherent decision trees—without any training data or fine-tuning—by eliciting the implicit world knowledge of large language models (LLMs) through prompt engineering, and then extracts topological embeddings from the generated trees. The method combines symbolic parsing of the generated tree structures with learnable tree-embedding encoding; to the authors' knowledge, it is the first to achieve purely zero-shot decision tree generation and embedding learning. Experiments on several small tabular benchmarks show that ZDT can surpass conventional data-driven decision trees on some datasets, and that its embeddings outperform data-driven tree-based embeddings on average in downstream tasks, offering a knowledge-driven, interpretable baseline for low-resource settings.
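The pipeline the summary describes—an LLM emits a decision tree as text, the tree is parsed into a symbolic structure, and samples are embedded by where they land in that tree—can be sketched roughly as below. This is not the authors' implementation: the feature names, thresholds, and the simple one-hot leaf-index embedding are illustrative placeholders standing in for a tree parsed from an LLM's answer.

```python
# Minimal sketch of the ZDT idea (illustrative, not the paper's code):
# a symbolic tree as if parsed from an LLM's textual output, plus a
# one-hot "which leaf did the sample reach" embedding.

from dataclasses import dataclass
from typing import Optional

@dataclass
class Node:
    feature: Optional[str] = None     # None marks a leaf
    threshold: float = 0.0
    left: Optional["Node"] = None     # branch taken when x[feature] <= threshold
    right: Optional["Node"] = None
    label: Optional[str] = None       # class prediction stored at a leaf

def leaves(node: Node) -> list:
    """Enumerate leaves left-to-right to assign stable embedding indices."""
    if node.feature is None:
        return [node]
    return leaves(node.left) + leaves(node.right)

def embed(node: Node, x: dict, index: dict):
    """Route sample x to a leaf; return its one-hot leaf embedding and label."""
    while node.feature is not None:
        node = node.left if x[node.feature] <= node.threshold else node.right
    vec = [0] * len(index)
    vec[index[id(node)]] = 1
    return vec, node.label

# Toy tree with made-up features/thresholds, as if parsed from an LLM reply.
tree = Node("age", 30.0,
            left=Node(label="low_risk"),
            right=Node("income", 50000.0,
                       left=Node(label="high_risk"),
                       right=Node(label="low_risk")))

index = {id(leaf): i for i, leaf in enumerate(leaves(tree))}
vec, label = embed(tree, {"age": 45, "income": 42000}, index)
print(vec, label)  # -> [0, 1, 0] high_risk
```

In the paper's setting these one-hot leaf vectors would be replaced by learned topological embeddings, but the routing step—purely symbolic, needing no training data—is the same.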
📝 Abstract
Large language models (LLMs) provide powerful means to leverage prior knowledge for predictive modeling when data is limited. In this work, we demonstrate how LLMs can use their compressed world knowledge to generate intrinsically interpretable machine learning models, i.e., decision trees, without any training data. We find that these zero-shot decision trees can even surpass data-driven trees on some small-sized tabular datasets and that embeddings derived from these trees perform better than data-driven tree-based embeddings on average. Our decision tree induction and embedding approaches can therefore serve as new knowledge-driven baselines for data-driven machine learning methods in the low-data regime. Furthermore, they offer ways to harness the rich world knowledge within LLMs for tabular machine learning tasks. Our code and results are available at https://github.com/ml-lab-htw/llm-trees.