🤖 AI Summary
Aligning large language models (LLMs) through fine-tuning incurs high computational costs, motivating methods that align model outputs at test time instead.
Method: This paper proposes a gradient-free test-time alignment method that operates in the pre-logit space: it applies Gaussian perturbations to hidden states preceding the output layer and employs importance sampling within a stochastic model-predictive control framework to dynamically optimize these representations—maximizing expected reward without modifying model parameters.
Contribution/Results: By integrating optimal control principles directly into the decoding process, the method enables efficient sequence re-ranking and generation refinement at test time. Experiments show that, under identical sample budgets, it outperforms best-of-n sampling and state-of-the-art reward-guided decoding strategies, achieving substantial gains in reward scores. These results indicate strong alignment quality and generalization with low computational overhead.
📝 Abstract
Test-time alignment of large language models (LLMs) has attracted attention because fine-tuning LLMs requires high computational costs. In this paper, we propose a new test-time alignment method, adaptive importance sampling on pre-logits (AISP), based on sampling-based model predictive control with stochastic control inputs. AISP applies a Gaussian perturbation to the pre-logits, i.e., the outputs of the penultimate layer, so as to maximize the expected reward with respect to the mean of the perturbation. We show that the optimal mean is obtained by importance sampling with sampled rewards. AISP outperforms best-of-n sampling in terms of reward as a function of the number of samples used, and achieves higher rewards than other reward-based test-time alignment methods.
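The core update described above, finding the perturbation mean by importance sampling over reward-weighted samples, can be sketched in a few lines. The snippet below is a minimal toy illustration, not the paper's implementation: the pre-logit vector, the quadratic reward, and all hyperparameters (`sigma`, `lam`, sample counts) are hypothetical stand-ins, and the exponentiated-reward weighting follows the standard MPPI-style form of sampling-based model predictive control.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins: in AISP, `h` would be the penultimate-layer output of an
# LLM and `reward` a learned reward model; here a quadratic toy suffices.
dim = 8
h = rng.normal(size=dim)           # pre-logit (penultimate-layer output)
target = np.ones(dim)              # hypothetical "aligned" pre-logit

def reward(z):
    # Toy reward: higher when the perturbed pre-logit is near `target`.
    return -np.sum((z - target) ** 2)

mu = np.zeros(dim)                 # mean of the Gaussian perturbation
sigma = 0.5                        # perturbation scale (assumed hyperparameter)
lam = 1.0                          # temperature of the importance weights
n_samples, n_iters = 64, 20

for _ in range(n_iters):
    # Sample perturbations around the current mean and score them.
    eps = rng.normal(loc=mu, scale=sigma, size=(n_samples, dim))
    rewards = np.array([reward(h + e) for e in eps])
    # Importance weights: exponentiated, temperature-scaled rewards
    # (shifted by the max for numerical stability), normalized to sum to 1.
    w = np.exp((rewards - rewards.max()) / lam)
    w /= w.sum()
    # Reward-weighted average of the sampled perturbations gives the
    # updated mean, without any gradient through the model.
    mu = w @ eps
```

After a few iterations the mean `mu` shifts the perturbed pre-logit `h + mu` toward higher reward than the unperturbed `h`, which is the gradient-free behavior the method relies on.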