🤖 AI Summary
This work addresses the limited reliability of large language models (LLMs) in quantitative knowledge retrieval. To this end, we propose a novel Bayesian workflow-oriented paradigm that supports principled prior distribution construction and missing data imputation. Methodologically, our approach integrates an LLM interface combining prompt engineering with uncertainty calibration, a structured prior elicitation framework, and a multi-round consistency verification mechanism—enabling joint expert knowledge distillation and missing-value reasoning. We present the first systematic evaluation of LLMs’ robustness and interpretability in quantitative knowledge retrieval. Experiments across real-world datasets from healthcare, environmental science, and engineering domains demonstrate that our method improves prediction accuracy by 12.3% on average, reduces reliance on labeled data by 40%, and substantially enhances the practicality and generalizability of Bayesian analysis.
📝 Abstract
Large language models (LLMs) have been extensively studied for their abilities to generate convincing natural language sequences, however their utility for quantitative information retrieval is less well understood. Here we explore the feasibility of LLMs as a mechanism for quantitative knowledge retrieval to aid two data analysis tasks: elicitation of prior distributions for Bayesian models and imputation of missing data. We introduce a framework that leverages LLMs to enhance Bayesian workflows by eliciting expert-like prior knowledge and imputing missing data. Tested on diverse datasets, this approach can improve predictive accuracy and reduce data requirements, offering significant potential in healthcare, environmental science and engineering applications. We discuss the implications and challenges of treating LLMs as 'experts'.