LLM Empowered Prototype Learning for Zero and Few-Shot Tasks on Tabular Data

📅 2025-08-12
📈 Citations: 0
Influential: 0
🤖 AI Summary
This paper addresses the challenge of modeling tabular data in zero-shot and few-shot learning scenarios. The authors propose a training-free prototype estimation framework that leverages large language models (LLMs) and eliminates reliance on in-context examples. The method introduces an example-free prompting mechanism that generates feature-level prototypes directly from task and feature descriptions alone. These prototypes are then refined through prototype enhancement using only a small number of labeled samples, without fine-tuning the LLM or training any downstream classifier. The core innovation lies in decoupling prototype construction from instance-based demonstrations, enabling description-driven, scalable, and robust zero/few-shot tabular learning. Extensive experiments on multiple benchmark datasets demonstrate substantial improvements in classification accuracy, validating the framework's strong generalization capability and practical deployability.

📝 Abstract
Recent breakthroughs in large language models (LLMs) have opened the door to in-depth investigation of their potential in tabular data modeling. However, effectively utilizing advanced LLMs in few-shot and even zero-shot scenarios remains challenging. To this end, we propose a novel LLM-based prototype estimation framework for tabular learning. Our key idea is to query the LLM to generate feature values from an example-free prompt, which relies solely on task and feature descriptions. With the feature values generated by the LLM, we can build a zero-shot prototype in a training-free manner, which can be further enhanced by fusing few-shot samples, avoiding training a classifier or fine-tuning the LLM. Thanks to the example-free prompt and prototype estimation, our framework bypasses the constraints imposed by example-based prompts, providing a scalable and robust approach. Extensive experiments demonstrate the effectiveness of our framework in zero- and few-shot tabular learning.
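A minimal sketch (in Python) of the pipeline the abstract describes, not the authors' implementation: the `query_llm` callable, the mixing weight `alpha`, and the Euclidean nearest-prototype rule are assumptions made for illustration only.

```python
import numpy as np

def zero_shot_prototype(query_llm, task_desc, feature_descs, class_name):
    # One LLM-estimated "typical value" per feature, queried from task/feature
    # descriptions only (no in-context examples), forms the class prototype.
    return np.array([query_llm(task_desc, f, class_name) for f in feature_descs])

def enhance_prototype(prototype, few_shot_x, alpha=0.5):
    # Fuse the zero-shot prototype with the mean of a handful of labeled samples
    # (alpha is an assumed mixing weight); no classifier training, no LLM fine-tuning.
    return alpha * prototype + (1.0 - alpha) * few_shot_x.mean(axis=0)

def predict(x, prototypes):
    # Nearest-prototype classification: assign x to the closest class prototype.
    return min(prototypes, key=lambda c: np.linalg.norm(x - prototypes[c]))

# Toy usage with a stand-in for the LLM call:
fake_llm = lambda task, feat, cls: 1.0 if cls == "positive" else 0.0
protos = {c: zero_shot_prototype(fake_llm, "toy task", ["f1", "f2"], c)
          for c in ("positive", "negative")}
print(predict(np.array([0.9, 0.8]), protos))  # -> "positive"
```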
Problem

Research questions and friction points this paper is trying to address.

Utilizing LLMs for zero-shot tabular data modeling
Generating feature values without example-based prompts
Enhancing prototypes by fusing few-shot samples
Innovation

Methods, ideas, or system contributions that make the work stand out.

LLM-based prototype estimation framework
Example-free prompt for feature generation (see the prompt sketch after this list)
Training-free zero-shot prototype construction
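As a rough illustration of the example-free prompt idea in the bullet above, the snippet below builds a query from task, feature, and class descriptions only; the wording and the helper name `build_prompt` are assumptions for illustration, not taken from the paper.

```python
def build_prompt(task_desc, feature_desc, class_name):
    # No in-context examples: the prompt carries only descriptions.
    return (
        f"Task: {task_desc}\n"
        f"Feature: {feature_desc}\n"
        f"For a typical instance of class '{class_name}', give a plausible "
        f"value for this feature. Answer with a single number."
    )

print(build_prompt(
    task_desc="Predict whether a bank client subscribes to a term deposit.",
    feature_desc="age: client age in years",
    class_name="subscribed",
))
```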
Authors
Peng Wang
School of Artificial Intelligence, Jilin University, China
Dongsheng Wang
College of Computer Science and Software Engineering, Shenzhen University, China
He Zhao
CSIRO's Data61 and Monash University, Australia
Hangting Ye
Jilin University
Dandan Guo
School of Artificial Intelligence, Jilin University, China
Yi Chang
School of Artificial Intelligence, Jilin University, China; Engineering Research Center of Knowledge-Driven Human-Machine Intelligence, Ministry of Education, China; International Center of Future Science, Jilin University, China