🤖 AI Summary
In zero-shot probabilistic prediction on tabular data, users lack reliable means to anticipate large language models’ (LLMs) performance on specific tasks without ground-truth labels.
Method: We propose a task-level performance prediction framework that requires no labeled data. It extracts unsupervised meta-features from the model’s own outputs—such as predicted probability distributions and confidence consistency—and leverages large-scale empirical analysis coupled with task-level meta-modeling to quantify LLMs’ zero-shot predictive capability.
Contribution/Results: First, we systematically reveal the high performance variability of LLMs across tabular prediction tasks. Second, we introduce generalizable, plug-and-play unsupervised metrics that reliably predict accuracy on unseen tasks (Pearson correlation ≥ 0.72). Third, we empirically validate that raw predicted probabilities serve as strong individual-accuracy signals even for foundational tasks—providing a principled basis for model suitability assessment in zero-shot settings.
📝 Abstract
Recent work has investigated the capabilities of large language models (LLMs) as zero-shot models for generating individual-level characteristics (e.g., to serve as risk models or augment survey datasets). However, when should a user have confidence that an LLM will provide high-quality predictions for their particular task? To address this question, we conduct a large-scale empirical study of LLMs' zero-shot predictive capabilities across a wide range of tabular prediction tasks. We find that LLMs' performance is highly variable, both on tasks within the same dataset and across different datasets. However, when the LLM performs well on the base prediction task, its predicted probabilities become a stronger signal for individual-level accuracy. Then, we construct metrics to predict LLMs' performance at the task level, aiming to distinguish between tasks where LLMs may perform well and where they are likely unsuitable. We find that some of these metrics, each of which are assessed without labeled data, yield strong signals of LLMs' predictive performance on new tasks.