Kad: A Framework for Proxy-based Test-time Alignment with Knapsack Approximation Deferral

📅 2025-10-30
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the rapidly escalating alignment cost of large language models (LLMs) as they scale, this paper proposes a test-time alignment framework that leverages a small aligned proxy model. Methodologically, it formulates token-level deferral decisions as a 0–1 knapsack problem and designs primal and dual approximation algorithms for efficient solving. The framework integrates proxy-model guidance, token-level cascaded alignment, and speculative decoding to achieve computationally efficient inference. Experiments demonstrate that the method maintains downstream task performance while improving speculative decoding throughput and reducing alignment computation overhead by 37%–52%. This work establishes a lightweight paradigm for LLM alignment, enabling scalable and cost-effective deployment without sacrificing fidelity.

📝 Abstract
Several previous works concluded that the largest part of the generation capabilities of large language models (LLMs) is learned (early) during pre-training. However, LLMs still require further alignment to adhere to downstream task requirements and stylistic preferences, among other desired properties. As LLMs continue to scale in size, the computational cost of alignment procedures increases prohibitively. In this work, we propose a novel approach to circumvent these costs via proxy-based test-time alignment, i.e., using guidance from a small aligned model. Our approach can be described as a token-specific cascading method, where the token-specific deferral rule is reduced to a 0–1 knapsack problem. In this setting, we derive primal and dual approximations of the optimal deferral decision. We experimentally show the benefits of our method in both task performance and speculative decoding speed.
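To make the knapsack framing concrete, here is a minimal sketch, not the paper's actual implementation: each token carries an estimated alignment gain from deferring to the aligned model and a compute cost, and a budget limits total deferral. The names `gains`, `costs`, and `budget` are illustrative assumptions; the greedy gain-per-cost heuristic shown is the classic LP-relaxation approximation for 0–1 knapsack, which amounts to thresholding on a dual price.

```python
def knapsack_deferral(gains, costs, budget):
    """Pick which tokens to defer to the aligned model under a compute budget.

    gains[i]: estimated alignment benefit of deferring token i (illustrative).
    costs[i]: compute cost of deferring token i (illustrative).
    budget:   total compute allowed for deferrals.
    Returns sorted indices of deferred tokens.
    """
    # Greedy by gain-per-cost density: the standard knapsack LP-relaxation
    # heuristic, equivalent to accepting tokens whose density clears a
    # dual threshold until the budget is exhausted.
    order = sorted(range(len(gains)), key=lambda i: gains[i] / costs[i], reverse=True)
    chosen, spent = [], 0.0
    for i in order:
        if spent + costs[i] <= budget:
            chosen.append(i)
            spent += costs[i]
    return sorted(chosen)

# Toy example: four tokens, budget of 2.0 units of compute.
print(knapsack_deferral([0.9, 0.2, 0.5, 0.7], [1.0, 1.0, 0.5, 2.0], 2.0))  # → [0, 2]
```

In this toy run, tokens 2 and 0 have the highest gain-per-cost densities (1.0 and 0.9) and fit within the budget, while token 3's cost would overshoot it, so only those two are deferred.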
Problem

Research questions and friction points this paper is trying to address.

Aligning large language models with downstream tasks efficiently
Reducing the computational cost of alignment via a small proxy model
Formulating token-level deferral as a 0–1 knapsack problem
Innovation

Methods, ideas, or system contributions that make the work stand out.

Proxy-based test-time alignment guided by a small aligned model
Token-specific deferral via knapsack approximation
Primal and dual approximations of the optimal deferral decision