🤖 AI Summary
This work addresses the challenge of enabling base large language models to approximate supervised fine-tuning (SFT) performance at inference time, without any parameter updates or fine-tuning.
Method: We propose a parameter-free approach grounded in in-context learning (ICL), retrieval-augmented generation, and probabilistic generalization theory.
Contribution/Results: We provide the first theoretical proof that Transformer-based base models can approximate arbitrary SFT policies using only a finite context window and a bounded number of in-context examples. Leveraging the Turing completeness of Transformers, we derive sample-complexity upper bounds for both text generation and linear classification tasks. Our analysis yields a solution that simultaneously offers rigorous theoretical guarantees and practical deployment feasibility, enabling low-overhead, fine-tuning-free model adaptation.
📝 Abstract
Large language models have transformed natural language processing, yet supervised fine-tuning (SFT) remains computationally intensive. This paper formally proves that capabilities acquired through SFT can be approximated by a base transformer model using inference-time techniques, specifically in-context learning (ICL), without altering model parameters, under idealized assumptions including unbounded computational resources and access to the fine-tuning dataset. We extend these results to practical scenarios with finite context lengths and partial dataset access. For text generation tasks with fixed output length $l$, datasets of size $\mathrm{O}\left( \frac{m V}{\varepsilon^2} \log \frac{m}{\delta} \right)$ or, with bounded context, $\mathrm{O}\left( \frac{l \log V}{\varepsilon^2} \log \frac{1}{\delta} \right)$ suffice to approximate fine-tuned behavior across $m$ contexts within error $\varepsilon$, where $V$ is the vocabulary size and $\delta$ is the failure probability. For linear classification, datasets of size $\mathrm{O}\left( \frac{d}{\varepsilon} \right)$ or, with fixed context, $\mathrm{O}\left( \frac{1}{\varepsilon^2} \log \frac{1}{\delta} \right)$ are sufficient, where $d$ is the input dimension. Grounded in the Turing completeness of transformers, these results provide a theoretical foundation for resource-efficient deployment of large language models, with practical techniques like retrieval-augmented generation bridging theory to real-world applications.
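The retrieval-augmented bridge from theory to practice can be illustrated with a minimal sketch: retrieve the demonstrations most similar to a query from the fine-tuning dataset and pack as many as fit into a bounded context window, mirroring the finite-context setting analyzed above. All names here, and the word-overlap (Jaccard) retriever standing in for a learned embedding retriever, are illustrative assumptions rather than the paper's method.

```python
def similarity(a: str, b: str) -> float:
    """Jaccard overlap of word sets; a placeholder retrieval score."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb) if wa | wb else 0.0

def build_icl_prompt(query, dataset, max_tokens):
    """Select top demonstrations by similarity, subject to a context budget.

    dataset: list of (input, output) pairs from the fine-tuning set.
    max_tokens: crude context-window budget, counted in whitespace tokens.
    """
    ranked = sorted(dataset, key=lambda ex: similarity(query, ex[0]),
                    reverse=True)
    parts, budget = [], max_tokens
    for x, y in ranked:
        demo = f"Input: {x}\nOutput: {y}\n"
        cost = len(demo.split())
        if cost > budget:       # bounded context: stop once the window is full
            break
        parts.append(demo)
        budget -= cost
    parts.append(f"Input: {query}\nOutput:")
    return "\n".join(parts)

# Toy fine-tuning dataset and query (hypothetical contents).
dataset = [
    ("translate cat to french", "chat"),
    ("translate dog to french", "chien"),
    ("capital of france", "Paris"),
]
prompt = build_icl_prompt("translate bird to french", dataset, max_tokens=40)
```

The resulting `prompt` places the most relevant demonstrations first and ends with the open query, ready to be fed to a frozen base model; no parameters are updated at any point.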