Can Transformers Learn Full Bayesian Inference in Context?

📅 2025-01-28
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work investigates whether Transformers can perform full Bayesian inference via in-context learning (ICL), without any parameter fine-tuning. To handle complex statistical models, including generalized linear models and latent factor models, the authors propose a general framework that combines prior-data fitted networks with continuous normalizing flows (CNFs), enabling context-driven posterior modeling and high-fidelity sample generation. The paper provides the first empirical evidence that, without any parameter updates, Transformers can execute full Bayesian inference directly from contextual data, yielding posterior samples whose quality matches those of state-of-the-art MCMC and variational inference methods. This bridges deep learning with principled Bayesian statistics, suggesting a path toward equipping large language models with interpretable, probabilistically rigorous statistical reasoning.
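The pipeline the summary describes, a context encoder whose summary vector conditions a continuous normalizing flow that transports base noise to posterior samples, can be sketched as follows. This is a minimal illustration under loud assumptions: the transformer encoder is reduced to an untrained linear embedding with mean pooling, the CNF velocity field is a small random MLP, and the flow is integrated with plain Euler steps. None of these names or shapes come from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode_context(X, y, d_embed=16):
    """Toy stand-in for a transformer context encoder: embed each
    (x_i, y_i) pair linearly, then mean-pool into one summary vector."""
    pairs = np.concatenate([X, y[:, None]], axis=1)                  # (n, d+1)
    W = rng.standard_normal((pairs.shape[1], d_embed)) / np.sqrt(pairs.shape[1])
    return np.tanh(pairs @ W).mean(axis=0)                           # (d_embed,)

def velocity(z, t, ctx, W1, W2):
    """Velocity field v(z, t | ctx) of the conditional CNF,
    here an untrained two-layer MLP for illustration only."""
    h = np.concatenate([z, [t], ctx])
    return W2 @ np.tanh(W1 @ h)

def sample_posterior(ctx, dim=2, n_samples=100, n_steps=20):
    """Transport base Gaussian noise toward posterior samples by
    Euler integration of dz/dt = v(z, t | ctx) from t=0 to t=1."""
    hidden = 32
    d_in = dim + 1 + ctx.shape[0]
    W1 = rng.standard_normal((hidden, d_in)) / np.sqrt(d_in)
    W2 = rng.standard_normal((dim, hidden)) / np.sqrt(hidden)
    Z = rng.standard_normal((n_samples, dim))
    dt = 1.0 / n_steps
    for k in range(n_steps):
        t = k * dt
        Z = Z + dt * np.stack([velocity(z, t, ctx, W1, W2) for z in Z])
    return Z

# Context: a tiny synthetic linear-regression dataset.
X = rng.standard_normal((10, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.standard_normal(10)
ctx = encode_context(X, y)
samples = sample_posterior(ctx)
print(samples.shape)  # (100, 2)
```

In the actual method the encoder and velocity field would be trained jointly on datasets simulated from the prior, so that at inference time a new context dataset alone (no gradient updates) determines the posterior the flow samples from.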

📝 Abstract
Transformers have emerged as the dominant architecture in the field of deep learning, with a broad range of applications and remarkable in-context learning (ICL) capabilities. While not yet fully understood, ICL has already proved to be an intriguing phenomenon, allowing transformers to learn in context -- without requiring further training. In this paper, we further advance the understanding of ICL by demonstrating that transformers can perform full Bayesian inference for commonly used statistical models in context. More specifically, we introduce a general framework that builds on ideas from prior fitted networks and continuous normalizing flows, which enables us to infer complex posterior distributions for methods such as generalized linear models and latent factor models. Extensive experiments on real-world datasets demonstrate that our ICL approach yields posterior samples that are similar in quality to state-of-the-art MCMC or variational inference methods not operating in context.
Problem

Research questions and friction points this paper is trying to address.

Transformer models
statistical inference
Bayesian reasoning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Transformer models
In-context Learning (ICL)
Bayesian inference